https://w.atwiki.jp/chapati4it/pages/512.html
When working with dates and times in Java, you broadly combine the following three classes: the date/time class java.util.Date, the calendar class java.util.Calendar, and the string-conversion class java.text.SimpleDateFormat.

■Contents
- Getting the system time
- Converting the system time to a string
- Converting a string to a time (Date)
- Obtaining a Calendar + printing its contents
- Date arithmetic with a Calendar
- Converting between Calendar and Date
- Sample: computing the date three months after a date string, back to a string
- Comparing dates (Date vs. Date): after, before, compareTo, comparing the internal long values
- Checking whether a date string is a date that actually exists

Getting the system time

    // Get the system time (the current time here is 2013/11/22 01:01:51.929)
    Date date = new Date();
    // Print the system time
    System.out.println(date.getTime()); // result: 1385049711929
    System.out.println(date);           // result: Fri Nov 22 01:01:51 JST 2013

To get the system time (the time set on the PC) in Java, you only have to construct a new Date. Note, however, that Date stores the time internally as a long integer, which humans cannot read directly, and converting a Date straight to a string yields a format that is not very familiar to Japanese readers.

Converting the system time to a string

    // Get the system time
    Date date = new Date();
    // Create a formatter
    SimpleDateFormat sdf = new SimpleDateFormat("yyyy/MM/dd HH:mm:ss.SSS");
    // Convert the Date to a string
    String str = sdf.format(date);
    System.out.println(str); // result: 2013/11/22 01:01:51.929

To turn a Date into a string in a familiar format, use SimpleDateFormat's format method. The pattern passed to the SimpleDateFormat constructor, as above, determines the output format; with the pattern "yyyyMMdd" you get "20131122". You also need this conversion whenever you write times to a text file or display them on screen.

Converting a string to a time (Date)

    // A date string (yyyy/MM/dd format)
    String str = "2013/11/22";
    // Create a formatter
    SimpleDateFormat sdf = new SimpleDateFormat("yyyy/MM/dd");
    // Convert the string to a Date
    Date date = sdf.parse(str);
    // Print the result
    System.out.println(date.getTime()); // result: 1385046000000
    System.out.println(date);           // result: Fri Nov 22 00:00:00 JST 2013

To convert a string to a Date, use SimpleDateFormat's parse method. Comparing dates, or computing things like three days or one month later, is awkward on raw strings, so this conversion is needed as well.

Obtaining a Calendar + printing its contents

    // Obtain a Calendar
    Calendar cal = Calendar.getInstance();
    // Print its contents (the current time here is 2013/11/22 01:01:51.929)
    System.out.println(cal.get(Calendar.YEAR));         // year:        2013
    System.out.println(cal.get(Calendar.MONTH));        // month:       10
    System.out.println(cal.get(Calendar.DAY_OF_MONTH)); // day:         22
    System.out.println(cal.get(Calendar.HOUR_OF_DAY));  // hour:        1
    System.out.println(cal.get(Calendar.MINUTE));       // minute:      1
    System.out.println(cal.get(Calendar.SECOND));       // second:      51
    System.out.println(cal.get(Calendar.MILLISECOND));  // millisecond: 929

Unlike Date, you do not write "new Calendar()"; you create a Calendar with "Calendar.getInstance()". As with Date, it holds the system time at the moment it is created. The distinguishing feature of Calendar is that you can read the year, month, day, hour, minute, second, millisecond and so on individually, as in "cal.get(Calendar.YEAR)" above. Be careful with the month: for some reason it is one less than the actual month, so January is 0, February is 1, and so on.

Date arithmetic with a Calendar

    cal.add(Calendar.YEAR, 3);           // 3 years later
    cal.add(Calendar.YEAR, -3);          // 3 years earlier
    cal.add(Calendar.MONTH, 3);          // 3 months later
    cal.add(Calendar.MONTH, -3);         // 3 months earlier
    cal.add(Calendar.DAY_OF_MONTH, 3);   // 3 days later
    cal.add(Calendar.DAY_OF_MONTH, -3);  // 3 days earlier
    cal.add(Calendar.HOUR_OF_DAY, 3);    // 3 hours later
    cal.add(Calendar.HOUR_OF_DAY, -3);   // 3 hours earlier
    cal.add(Calendar.MINUTE, 3);         // 3 minutes later
    cal.add(Calendar.MINUTE, -3);        // 3 minutes earlier
    cal.add(Calendar.SECOND, 3);         // 3 seconds later
    cal.add(Calendar.SECOND, -3);        // 3 seconds earlier

The other strength of Calendar is that date arithmetic such as "3 days later" or "3 days ago" is easy: pass 3 to the add method to get 3 days or 3 months later, and -3 to get 3 days or 3 months earlier.

Converting between Calendar and Date

    // Calendar to Date
    Date date = cal.getTime();
    // Set a Date value into a Calendar
    cal.setTime(date);

Moving values between a Calendar and a Date is very easy with the getTime and setTime methods.

Sample: computing the date three months after a date string, back to a string

    import java.text.*;
    import java.util.*;

    public class DateTimeSample {
        public static void main(String[] args) throws ParseException {
            // The date string
            String str = "2013/11/22";
            // Create a formatter
            SimpleDateFormat sdf = new SimpleDateFormat("yyyy/MM/dd");
            // Convert the date string to a Date
            Date date = sdf.parse(str);
            // Create a Calendar
            Calendar cal = Calendar.getInstance();
            // Set the Date value into the Calendar
            cal.setTime(date);
            // Compute three months later
            cal.add(Calendar.MONTH, 3);
            // Get a Date back from the Calendar
            date = cal.getTime();
            // Convert the Date back to a date string
            String str2 = sdf.format(date);
            // Print the result: 2014/02/22
            System.out.println(str2);
        }
    }

Combining the pieces shown so far — converting a string to a Date, setting a Date into a Calendar, doing the arithmetic, getting a Date back from the Calendar, and converting the Date back to a string — makes things like this possible.

Comparing dates (Date vs. Date)

after

    // true if date1 is a later date/time than date2
    if (date1.after(date2))
        System.out.println("date1 is later than date2");
    else
        System.out.println("date1 is the same as or earlier than date2");

before

    // true if date1 is an earlier date/time than date2
    if (date1.before(date2))
        System.out.println("date1 is earlier than date2");
    else
        System.out.println("date1 is the same as or later than date2");

These are the after and before methods. after returns true if the receiving Date is later (after) than the argument Date; before is the opposite. This author rarely uses them.

compareTo

    // 0 if date1 = date2
    if (date1.compareTo(date2) == 0)
        System.out.println("date1 and date2 are the same date/time");
    // negative if date1 < date2
    if (date1.compareTo(date2) < 0)
        System.out.println("date1 is earlier than date2");
    // positive if date1 > date2
    if (date1.compareTo(date2) > 0)
        System.out.println("date1 is later than date2");

compareTo returns 0 if the dates are equal, a negative value if date1 < date2, and a positive value if date1 > date2. It is convenient in that one method tells you whether a date is equal, in the past, or in the future, but this author rarely uses it, because it is easy to forget which sign means past and which means future.

Comparison using the long value that Date holds internally

    if (date1.getTime() == date2.getTime())
        System.out.println("date1 and date2 are the same date/time");
    if (date1.getTime() > date2.getTime())
        System.out.println("date1 is later than date2");
    if (date1.getTime() < date2.getTime())
        System.out.println("date1 is earlier than date2");

A Date keeps its time internally as a long. The getTime method exposes that long value, so you can compare it with the ordinary comparison operators (==, !=, <, > and so on). This author uses this approach a lot, because it is visually obvious which direction — past or future — the comparison is testing.

Checking whether a date string is a date that actually exists

    String str1 = "2013/11/31"; // a date that does not exist
    // Create a formatter
    SimpleDateFormat sdf = new SimpleDateFormat("yyyy/MM/dd");
    // Convert the string to a Date
    Date date = sdf.parse(str1);
    // Convert the Date back to a string
    String str2 = sdf.format(date); // re-converted result: 2013/12/01
    // Compare the re-converted string with the original
    if (str1.equals(str2))
        System.out.println(str1 + " is a date that exists.");
    else
        System.out.println(str1 + " is not a date that exists.");

When SimpleDateFormat converts a date/time that does not exist, the excess is rolled forward into a real date/time, so converting that Date back to a string turns "2013/11/31" into "2013/12/01", for example. By comparing the string before and after the round trip, you can conclude that the date exists if they are equal, and does not exist if they differ.
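On Java 8 and later, the same tasks covered above — parsing, date arithmetic, and validity checking — are handled by the java.time API, which avoids Calendar's zero-based months and lenient parsing. A minimal sketch (not part of the original article; class name is illustrative):

```java
import java.time.LocalDate;
import java.time.format.DateTimeFormatter;
import java.time.format.DateTimeParseException;
import java.time.format.ResolverStyle;

public class LocalDateSample {
    public static void main(String[] args) {
        // "uuuu" is the proleptic year field; ResolverStyle.STRICT makes the
        // formatter reject impossible dates instead of rolling them forward.
        DateTimeFormatter f = DateTimeFormatter.ofPattern("uuuu/MM/dd")
                .withResolverStyle(ResolverStyle.STRICT);

        // Parse a date string, add three months, format it back
        LocalDate d = LocalDate.parse("2013/11/22", f);
        LocalDate threeMonthsLater = d.plusMonths(3);
        System.out.println(threeMonthsLater.format(f)); // prints: 2014/02/22

        // Validity check: strict parsing throws on dates that do not exist
        try {
            LocalDate.parse("2013/11/31", f);
            System.out.println("2013/11/31 is a date that exists.");
        } catch (DateTimeParseException e) {
            System.out.println("2013/11/31 is not a date that exists.");
        }
    }
}
```

Note that with ResolverStyle.STRICT the pattern must use "uuuu" rather than "yyyy" (year-of-era), since the era is not part of the input.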
https://w.atwiki.jp/yasrun/pages/178.html
SimpleDateFormat accepts 5-digit years

I wrote a program like the following. It takes a date range and fetches matching rows from a database. The input dates, dateFrom and dateTo, are validated by the dateCheck method; if a date is invalid, a default date is substituted.

    /** Data retrieval */
    public List<Object> select(Connection conn, String dateFrom, String dateTo) {
        List<Object> result = new ArrayList<Object>();
        dateFrom = dateCheck(dateFrom, "2000/01/01");
        dateTo = dateCheck(dateTo, "2020/12/31");
        String sql = "SELECT * FROM TARGET_TABLE "
                + " WHERE ENTRY_DATE BETWEEN TO_DATE(?, 'yyyy/mm/dd') AND TO_DATE(?, 'yyyy/mm/dd')";
        PreparedStatement ps = null;
        ResultSet rs = null;
        try {
            ps = conn.prepareStatement(sql);
            ps.setString(1, dateFrom);
            ps.setString(2, dateTo);
            rs = ps.executeQuery();
            // fetch the rows and fill the list
            // ...
        } catch (SQLException e) {
        } finally {
            // close resources
        }
        return result;
    }

    /** Date validation */
    private String dateCheck(String dateString, String alternativeDateString) {
        String result = "";
        DateFormat sdf = new SimpleDateFormat("yyyy/MM/dd");
        sdf.setLenient(false);
        try {
            sdf.parse(dateString);
            result = dateString;
        } catch (ParseException e) {
            result = alternativeDateString;
        }
        return result;
    }

The problem: when I fed it a date with a 5-digit year, such as "20001/1/1", it passed dateCheck. Apparently SimpleDateFormat happily processes 5-digit years. I also tried this program:

    public static void main(String[] args) throws ParseException {
        String s = "20001/1/1";
        SimpleDateFormat sdf = new SimpleDateFormat("yyyy/MM/dd");
        sdf.setLenient(false);
        Date d = sdf.parse(s);
        System.out.println(d);
    }

The console printed "Mon Jan 01 00:00:00 JST 20001". That alone would be harmless, but the database (Oracle, where I found this) does not accept 5-digit years, so running the first program ends in

    java.sql.SQLException: ORA-01861: literal does not match format string

To fix this, you probably have to supplement SimpleDateFormat#parse with a regular-expression check or a string-length check. And if the value is entered on a screen, the screen should restrict input so that invalid dates cannot be entered in the first place.

2020/05/19 addendum: I wonder whether changing the yyyy in the SQL to yyyyy would also solve it?
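The regular-expression guard suggested above could be sketched as follows. The pattern and class name are illustrative assumptions, not taken from the original article:

```java
import java.text.ParseException;
import java.text.SimpleDateFormat;
import java.util.regex.Pattern;

public class StrictDateCheck {
    // Exactly a 4-digit year, 2-digit month and 2-digit day
    // (the shape requirement is an assumption for this sketch).
    private static final Pattern DATE_SHAPE =
            Pattern.compile("\\d{4}/\\d{2}/\\d{2}");

    /** Returns true only for strings shaped yyyy/MM/dd that also parse strictly. */
    public static boolean isValidDate(String s) {
        if (s == null || !DATE_SHAPE.matcher(s).matches()) {
            return false; // rejects 5-digit years such as "20001/01/01"
        }
        SimpleDateFormat sdf = new SimpleDateFormat("yyyy/MM/dd");
        sdf.setLenient(false); // rejects impossible dates such as 2013/11/31
        try {
            sdf.parse(s);
            return true;
        } catch (ParseException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println(isValidDate("2013/11/22"));  // true
        System.out.println(isValidDate("20001/01/01")); // false: year too long
        System.out.println(isValidDate("2013/11/31"));  // false: no such date
    }
}
```

The regex closes the 5-digit-year hole because matches() requires the whole string to fit the pattern, while setLenient(false) still catches dates that have the right shape but do not exist.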
https://w.atwiki.jp/lspdfrinfo/pages/75.html
SimpleHUD

Simple HUD is a script created by Venoxity that displays the player's location, a compass, the time of day and so on on screen. Similar mods exist, such as Player Location Display and RageShowMyLocation, but they fail to start and cause bugs when the in-game language is Japanese, so they were unusable. Simple HUD works with Japanese as well. Note, however, that the text will be garbled unless Better Japanese Font is also installed, and that the mod requires many prerequisite mods.

Latest publicly released version at the time this page was edited: 1.1.9

Simple HUD download link: https://www.lcpdfr.com/downloads/gta5mods/scripts/39944-simplehud/
Better Japanese Font download link: https://www.gta5-mods.com/misc/better-japanese-font-kagikn

Important changes in the latest update
The latest version, 1.1.9, has been released. The new text elements added in this update cannot be configured or repositioned in-game; wait for a fix, or set them directly in the ini file. Text elements that were configurable before can still be changed in-game.

Required
- ScriptHook V (separate download required)
- ScriptHook V.NET (separate download required)
- LemonUI 2.0 (separate download required)
- openIV (separate download required)
- Newtonsoft.Json.dll (bundled)
- NAudio.dll (bundled)

Recommended
- SimpleCTRL: another plugin by the same author. Includes the speedometer from older versions and various other vehicle-related features.

Installation
Open the Grand Theft Auto V folder inside the extracted archive, choose either Nightly or Stable, and drag the scripts folder of your choice into the main game directory. Next, open Props Textures/SimpleHUD - (Base Version)/Install and start openIV. Drag the "SimpleHUD - PART ONE.oiv" and "SimpleHUD - PART TWO.oiv" files into the openIV window and install them into the mods folder. Then download and extract LemonUI, and drag the .dll, .pdb and .xml files from its SHVDN3 folder into (main directory)/scripts. That completes the installation.

Plugin files and dependencies
(from the main directory; some prerequisite files omitted)
┣ ScriptHookV.dll
┣ ScriptHookV.Net
┗ scripts
  ┣ LemonUI.SHVDN3.dll (interface required for the plugin to run)
  ┣ LemonUI.SHVDN3.pdb (interface required for the plugin to run)
  ┣ LemonUI.SHVDN3.xml (interface required for the plugin to run)
  ┣ SimpleHUD.dll (the plugin itself)
  ┣ SimpleHUD.ini (plugin configuration)
  ┣ Newtonsoft.Json.dll (library required for the plugin to run)
  ┣ NAudio.dll (library required for the plugin to run)
  ┗ SimpleHUD
    ┣ audio (folder with the audio files for the speed-limit alert)
    ┣ data (folder with map and route display data)
    ┗ settings (folder with postal-code and speed-limit settings data)

SimpleHUD.ini

[LIMIT]
AnnounceSpeedLimitWarning=false ... whether the speed-limit alert is enabled. Default: disabled (false).
AmountOverSpeedLimit=10 ... how far over the road's speed limit counts as a violation. Default: 10.
HighSpeedAlertAudio=true ... whether the speed-limit alert plays a sound. Default: enabled (true).
DisplayOnlyOnChange=false ... whether the speed limit is shown only when it changes while driving; when false, it is always shown. Default: disabled (false).
ChangeDisplayTime=3000 ... when the setting above is true, how long (in milliseconds) the limit stays visible after a change. Default: 3000.
ShowOnlyinVehicle=true ... whether the speed limit is shown only while in a vehicle. Default: enabled (true).
LimitLocale=US ... the sign style used for the speed-limit display. Default: US (American).
LimitEnabled=true ... whether the speed-limit display is enabled. Default: enabled (true).

[DIRECTION]
DirectionPosX=0.172
DirectionPosY=0.942
DirectionPosX_RadarLrg=0.262
DirectionPosY_RadarLrg=0.932
... screen position of the heading (facing-direction) display.
DirectionText=|{0}| ... custom text; change the "|" parts to change the delimiters around the heading.
DirectionFont=FixedWidthNumbersStyle ... font of the heading display. For Japanese, enter "ChaletComprimeCologne".
DirectionScale=0.30 ... size of the heading display.
DirectionEnabled=true ... whether the heading is shown. Default: enabled (true).

[COMPASS]
CompassEnabled=false ... whether the compass is shown. Default: disabled (false). Currently under development.

[ROAD]
RoadPosX=0.190
RoadPosY=0.940
RoadPosX_RadarLrg=0.284
RoadPosY_RadarLrg=0.932
... position where the name of the road you are on (e.g. "... Street") is shown.
RoadFont=Leaderboard ... font of the road-name display. For Japanese, enter "ChaletComprimeCologne".
RoadScale=0.30 ... size of the road-name display.
RoadEnabled=true ... whether the road name is shown. Default: enabled (true).

[GENERAL]
PrimaryColor=~c~ ... primary color of the HUD.
SecondaryColor=~s~ ... secondary color of the HUD.
ShowOnlyinVehicle=false ... whether the place-name display appears only while in a vehicle. Default: disabled (false).

[POSTAL]
PostalPosX=0.174
PostalPosY=0.922
PostalPosX_RadarLrg=0.262
PostalPosY_RadarLrg=0.912
... position where the postal code nearest to you is shown.
PostalText=Nearby Postal ... custom label for the postal code. Default: "Nearby Postal". Japanese not supported.
PostalFont=Leaderboard ... font of the nearest-postal-code display. For Japanese, enter "ChaletComprimeCologne".
PostalScale=0.26 ... size of the nearest-postal-code display.
PostalCompact=false ... whether to drop the custom label and show the postal code together with ROAD. Default: disabled (false).
PostalEnabled=true ... whether the nearest postal code is shown. Default: enabled (true).

[TIME]
TimePosX=0.172
TimePosY=0.96
TimePosX_RadarLrg=0.262
TimePosY_RadarLrg=0.96
... position where the current in-game time is shown.
TimeFont=Leaderboard ... font of the clock display. For Japanese, enter "ChaletComprimeCologne".
TimeScale=0.28 ... size of the clock display.
TimeFormat=24h ... clock format. Default: 24-hour.
TimeInGameFormat=true ... whether the clock shows in-game time; when false, the PC's clock is shown. Default: enabled (true).
TimeEnabled=true ... whether the in-game clock is shown. Default: enabled (true).

[COUNTY]
CountyPosX=0.197
CountyPosY=0.96
CountyPosX_RadarLrg=0.262
CountyPosY_RadarLrg=0.96
... position where the county you are in is shown.
CountyScale=0.28 ... size of the county display.
CountyEnabled=true ... whether the current county is shown. Default: enabled (true). Not changeable in-game in the current version 1.1.9.

[AOP]
AOPPosX=0.175
AOPPosY=0.904
AOPPosX_RadarLrg=0.262
AOPPosY_RadarLrg=0.912
... position of the AOP (Area of Play) display, which shows which police force covers your current location.
AOPText=AOP ... custom label for the AOP display. Default: "AOP".
AOPFont=Leaderboard ... font of the AOP display. For Japanese, enter "ChaletComprimeCologne".
AOPScale=0.26 ... size of the AOP display.
AOPEnabled=false ... whether the AOP is shown. Default: disabled (false). Not changeable in-game in the current version 1.1.9.

[ZONE]
ZoneEnabled=true ... whether the current zone (Del Perro, Alta, etc.) is shown. Default: enabled (true).

[NOTIFICATIONS]
NotificationTimeout=10000 ... how long the slow-down warning notification stays on screen before disappearing. Default: 10000 milliseconds (10 seconds).
HighSpeedAlertNotification=true ... whether the slow-down warning notification is shown when driving too fast. Default: enabled (true).

[MENU]
ToggleKey=F10
ModifierKey=Shift
... key and modifier key that open the in-game settings menu. Default: Shift+F10.
MenuEnabled=false ... whether the in-game settings menu is enabled. Default: disabled (false). Set this to true if you want to position the displays in-game.
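As an example, a minimal SimpleHUD.ini fragment that enables the in-game settings menu and switches the heading and road-name displays to a Japanese-capable font could look like this (the keys are from the listing above; which keys you change is up to you):

```ini
[DIRECTION]
DirectionFont=ChaletComprimeCologne

[ROAD]
RoadFont=ChaletComprimeCologne

[MENU]
; open the in-game configuration menu with Shift+F10
ToggleKey=F10
ModifierKey=Shift
MenuEnabled=true
```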
https://w.atwiki.jp/twilightdaicon/pages/7.html
TitleFormat (a personal translation)
Foobar2000 Titleformat Reference
From Hydrogenaudio Knowledgebase
Field remappings

Some of the fields accessible through %name% are remapped to other values to make writing titleformat scripts more convenient.

Metadata

%album artist%
Defined as $if3($meta(album artist),$meta(artist),$meta(composer),$meta(performer)).

%album%
Defined as $if3($meta(album),$meta(venue)).

%artist%
Defined as $if3($meta(artist),$meta(album artist),$meta(composer),$meta(performer)).

%disc%
Returns the discnumber. The discnumber is taken from the discnumber tag; if that does not exist, it is taken from the disc tag.
If neither exists, the field is undefined. This is equivalent to the %discnumber% remapping.

%discnumber%
Returns the discnumber. The discnumber is taken from the discnumber tag; if that does not exist, it is taken from the disc tag. If neither exists, the field is undefined. This is equivalent to the %disc% remapping.

%track artist%
Defined as $meta(artist), if $meta(album artist) is different from $meta(artist); otherwise this field is empty.

%title%
Defined as $if2($meta(title),%_filename%). Returns the title tag if available, otherwise the filename excluding the extension.

%track%
Returns the tracknumber padded to two digits from the left with zeroes. The tracknumber is taken from the tracknumber tag; if that does not exist, it is taken from the track tag. If neither exists, this field is undefined. This is equivalent to the %tracknumber% remapping.

%tracknumber%
Returns the tracknumber padded to two digits from the left with zeroes. The tracknumber is taken from the tracknumber tag; if that does not exist, it is taken from the track tag. If neither exists, this field is undefined. This is equivalent to the %track% remapping.

Technical information

%bitrate%
Defined as $if2($info(bitrate_dynamic),$info(bitrate)). Returns the current bitrate, if available, otherwise the average bitrate. If neither is available, nothing is returned.

%channels%
Defined as $channels(). Returns the number of channels in text form; returns "mono" and "stereo" instead of "1" and "2".

%filesize%
Defined as %_filesize%. Returns the filesize in bytes.

%samplerate%
Defined as $info(samplerate). Returns the samplerate in Hz.

%codec%
Defined as $codec().

Special fields

%playlist_number%
Defined as $num(%_playlist_number%,$len(%_playlist_total%)). Returns the position of the track as an index into the playlist. The first track has index 1.
The index is padded from the left with zeroes to the same number of digits as the last track.

Control flow

The functions in this section can be used to conditionally execute statements.

[...] (conditional section)
Evaluates the expression between [ and ]. If it has the truth value true, its string value and the truth value true are returned. Otherwise an empty string and false are returned.
Example: [%artist%] returns the value of the artist tag if it exists, and nothing otherwise, whereas a bare %artist% would return "?".

$if(cond,then)
If cond evaluates to true, the then part is evaluated and its value returned. Otherwise, false is returned.

$if(cond,then,else)
If cond evaluates to true, the then part is evaluated and its value returned. Otherwise, the else part is evaluated and its value returned.

$if2(a,else)
Like $if(a,a,else), except that a is only evaluated once.

$if3(a1,a2,...,aN,else)
Evaluates arguments a1 ... aN until one is found that evaluates to true. If that happens, its value is returned. Otherwise the else part is evaluated and its value returned.

$ifgreater(n1,n2,then,else)
Compares the integer numbers n1 and n2; if n1 is greater than n2, the then part is evaluated and its value returned. Otherwise the else part is evaluated and its value returned.

$iflonger(s1,s2,then,else)
Compares the lengths of the strings s1 and s2; if s1 is longer than s2, the then part is evaluated and its value returned. Otherwise the else part is evaluated and its value returned.

$select(n,a1,...,aN)
If the value of n is between 1 and N, the n-th argument is evaluated and its value returned. Otherwise false is returned.

Arithmetic functions

The functions in this section can be used to perform arithmetic on integer numbers. A string will be automatically converted to a number and vice versa. The conversion to a number uses the longest prefix of the string that can be interpreted as a number. Leading whitespace is ignored.
Examples: "c3po" → 0, " -12" → -12, but "- 12" → 0.

$add(a,b)
Adds a and b. Can be used with an arbitrary number of arguments; $add(a,b,...) is the same as $add($add(a,b),...).

$div(a,b)
Divides a by b. If b evaluates to zero, it returns a. Can be used with an arbitrary number of arguments; $div(a,b,...) is the same as $div($div(a,b),...).

$greater(a,b)
Returns true if a is greater than b, otherwise false.

$max(a,b)
Returns the maximum of a and b. Can be used with an arbitrary number of arguments; $max(a,b,...) is the same as $max($max(a,b),...).

$min(a,b)
Returns the minimum of a and b. Can be used with an arbitrary number of arguments; $min(a,b,...) is the same as $min($min(a,b),...).

$mod(a,b)
Computes the remainder of dividing a by b. The result has the same sign as a. If b evaluates to zero, the result is a. Can be used with an arbitrary number of arguments; $mod(a,b,...) is the same as $mod($mod(a,b),...).

$mul(a,b)
Multiplies a and b. Can be used with an arbitrary number of arguments; $mul(a,b,...) is the same as $mul($mul(a,b),...).

$muldiv(a,b,c)
Multiplies a and b, then divides by c. The result is rounded to the nearest integer.

$rand()
Generates a random number in the range from 0 to 2^32-1.

$sub(a,b)
Subtracts b from a. Can be used with an arbitrary number of arguments; $sub(a,b,...) is the same as $sub($sub(a,b),...).

Boolean functions

The functions in this section can be used to work with truth values (true and false), which have no explicit representation in titleformat scripts. They do not return a string or number value. You can use them for more complex conditions with $if and related functions.

$and(...)
Logical And of an arbitrary number of arguments. Returns true if and only if all arguments evaluate to true.
Special case: $and(x,y) is true if both x and y are true; otherwise it is false.

$or(...)
Logical Or of an arbitrary number of arguments.
Returns true if at least one argument evaluates to true.
Special case: $or(x,y) is true if x or y is true, or if both are true; otherwise it is false.

$not(x)
Logical Not. Returns false if x is true; otherwise it returns true.

$xor(...)
Logical Exclusive-or of an arbitrary number of arguments. Returns true if an odd number of arguments evaluate to true.
Special case: $xor(x,y) is true if one of x and y is true, but not both; otherwise it is false.

Color functions

$blend(color1,color2,part,total)
Returns a color that is a blend between color1 and color2. If part is smaller than or equal to zero, color1 is returned. If part is greater than or equal to total, color2 is returned. Otherwise a blended color is returned that is part parts color1 and total-part parts color2. The blending is performed in the RGB color space.

$rgb()
Resets the text color to the default color.

$rgb(r,g,b)
Sets the color for text. r, g and b are the red, green and blue components of the color for unselected text. The color for selected text is set to the inverse color.

$rgb(r1,g1,b1,r2,g2,b2)
Sets the color for text. r1, g1 and b1 are the red, green and blue components of the color for unselected text; r2, g2 and b2 are those of the color for selected text.

$transition(string,color1,color2)
Inserts color codes into string, so that the first character has color1, the last character has color2, and intermediate characters have blended colors. The blending is performed in the RGB color space. Note that color codes are additional characters that will also be counted by string manipulation functions. For example, if you need to truncate a string, you should do so before applying $transition.

Now playing info

The following functions and fields are usable in scripts for the currently playing item, for example the status bar, the main window title, and the copy command script.
Special fields

%_time_elapsed%
Returns the elapsed time.

%_time_remaining%
Returns the remaining time until the track ends.

%_time_total%
Returns the total length of the track.

%_time_elapsed_seconds%
Returns the elapsed time in seconds.

%_time_remaining_seconds%
Returns the remaining time in seconds.

%_time_total_seconds%
Returns the total track length in seconds.

%_ispaused%
Returns "1" if playback is paused and an empty string otherwise.

Playlist info

The following functions and fields are usable in playlist scripts.

Special fields

%isplaying%
Returns "1" if the file is currently playing and an empty string otherwise. The old version %_isplaying% still works.

%_ispaused%
Returns "1" if playback is paused, an empty string otherwise.

%_playlist_number%
Returns the playlist index of the specified item. The first item is at index 1. Also see %playlist_number%.

%_playlist_total%
Returns the number of items in the playlist.

%playlist_name%
Returns the name of the playlist containing the specified item. The old version %_playlist_name% still works.

String functions

The functions in this section can be used to manipulate character strings.

$abbr(x)
Returns the abbreviation of x.

$abbr(x,len)
Returns the abbreviation of x if x is longer than len characters; otherwise returns x.

$ansi(x)
Converts x to the system codepage and back. Any characters that are not present in the system codepage will be removed or replaced. Useful for mass-renaming files to ensure compatibility with non-Unicode-capable software.

$caps(x)
Converts the first letter of every word of x to uppercase, and all other letters to lowercase.

$caps2(x)
Converts the first letter of every word of x to uppercase, and leaves all other letters as they are.

$char(x)
Inserts the Unicode character with code x.

$crlf()
Inserts an end-of-line marker (carriage return, line feed).
Can be used to generate multiple lines in the output, for example for the tooltip of the system notification area ("systray") icon.

$cut(a,len)
Returns the first len characters from the left of a.

$directory(x)
Extracts the directory name from the file path x.

$directory(x,n)
Extracts the directory name from the file path x; goes up by n levels.

$ext(x)
Extracts the file extension from x, which must be a file name or path.

$filename(x)
Extracts the file name from a full path.

$fix_eol(x)
If x contains an end-of-line marker (CR-LF), the end-of-line marker and all text to the right of it are replaced by " (...)". Otherwise x is returned unaltered.

$fix_eol(x,indicator)
If x contains an end-of-line marker (CR-LF), the end-of-line marker and all text to the right of it are replaced by indicator. Otherwise x is returned unaltered.

$hex(n)
Formats the integer number n in hexadecimal notation.

$hex(n,len)
Formats the integer number n in hexadecimal notation with len digits. Pads with zeroes from the left if necessary.

$insert(a,b,n)
Inserts b into a after n characters.

$left(a,len)
Returns the first len characters from the left of a.

$len(a)
Returns the length of string a in characters.

$len2(a)
Returns the length of string a in characters, respecting double-width character rules (double-width characters are counted as two).

$longer(a,b)
Returns true if string a is longer than string b, false otherwise.

$lower(a)
Converts a to lowercase.

$longest(a,...)
Returns the longest of its arguments. Can be used with an arbitrary number of strings.

$num(n,len)
Formats the integer number n in decimal notation with len digits. Pads with zeroes from the left if necessary.

$pad(x,len)
Pads x from the left with spaces to len characters.

$pad_right(x,len)
Pads x from the right with spaces to len characters.

$pad(x,len,char)
Pads x from the left with char to len characters.
$pad_right(x,len,char)
Pads x from the right with char to len characters.

$padcut(x,len)
Returns the first len characters from the left of x if x is longer than len characters. Otherwise pads x from the left with spaces to len characters.

$padcut_right(x,len)
Returns the first len characters from the left of x if x is longer than len characters. Otherwise pads x from the right with spaces to len characters.

$progress(pos,range,len,a,b)
Creates a progress bar: pos contains the position, range the range, len the progress bar length in characters; a and b are the characters the bar is built from.
Example: $progress(%_time_elapsed_seconds%,%_time_total_seconds%,20,'#','=') produces "====#===============", where the # character moves with the playback position.

$progress2(pos,range,len,a,b)
Creates a progress bar: pos contains the position, range the range, len the progress bar length in characters; a and b are the characters the bar is built from. Produces a different appearance than $progress.

$repeat(a,n)
Returns n copies of a. Note that a is evaluated once before its value is used, so $repeat cannot be used for loops.

$replace(a,b,c)
Replaces all occurrences of string b in string a with string c. Can also be used with an arbitrary number of arguments. Note that $replace(a,b1,c1,b2,c2) is generally not the same as $replace($replace(a,b1,c1),b2,c2).
Example: $replace(ab,a,b,b,c) → "bc", but $replace($replace(ab,a,b),b,c) → "cc"

$right(a,len)
Returns the last len characters of a.

$roman(n)
Formats the integer number n in Roman notation.

$shortest(a,...)
Returns the shortest of its arguments. Can be used with an arbitrary number of strings.

$strchr(s,c)
Finds the first occurrence of character c in string s.
Example: $strchr(abca,a) → 1

$strrchr(s,c)
Finds the last occurrence of character c in string s.
Example: $strrchr(abca,a) → 4

$strstr(s1,s2)
Finds the first occurrence of string s2 in string s1.
$strcmp(s1,s2)
Performs a case-sensitive comparison of the strings s1 and s2.

$stricmp(s1,s2)
Performs a case-insensitive comparison of the strings s1 and s2.

$substr(s,m,n)
Returns the substring of string s, starting from the m-th character and ending at the n-th character.

$trim(s)
Removes leading and trailing spaces from string s.

$tab()
Inserts one tabulator character.

$tab(n)
Inserts n tabulator characters.

$upper(s)
Converts string s to uppercase.

Track info

The functions and fields in this section can be used to access information about tracks.

Metadata

$meta(name)
Returns the value of the tag called name. If multiple values of that tag exist, they are concatenated with ", " as separator.
Example: $meta(artist) → "He, She, It"

$meta(name,n)
Returns the value of the n-th tag called name.
Example: $meta(artist,2) → "She"

$meta_sep(name,sep)
Returns the value of the tag called name. If multiple values of that tag exist, they are concatenated with sep as separator.
Example: $meta_sep(artist,' + ') → "He + She + It"

$meta_sep(name,sep,lastsep)
Returns the value of the tag called name. If multiple values of that tag exist, they are concatenated with sep as separator between all but the last two values, which are concatenated with lastsep.
Example: $meta_sep(artist,', ',', and ') → "He, She, and It"

$meta_test(...)
Returns true if all the given tags exist.
Example: $meta_test(artist,title) → true

$meta_num(name)
Returns the number of values for the tag called name.
Example: $meta_num(artist) → 3

$tracknumber()
Returns the tracknumber padded to 2 digits with zeroes.

$tracknumber(n)
Returns the tracknumber padded to n digits with zeroes.

Technical information

$info(name)
Returns the value of the technical information field called name.
Example: $info(channels) → 2

$codec()
Returns the codec of the track. If no codec field is present, it uses the file extension.
Example: $codec() returns "WavPack".
$channels(): Returns the number of channels in text form. Example: $channels() returns "stereo".
%__replaygain_album_gain%: Returns the ReplayGain album gain value. Not available through $info(replaygain_album_gain).
%__replaygain_album_peak%: Returns the ReplayGain album peak value. Not available through $info(replaygain_album_peak).
%__replaygain_track_gain%: Returns the ReplayGain track gain value. Not available through $info(replaygain_track_gain).
%__replaygain_track_peak%: Returns the ReplayGain track peak value. Not available through $info(replaygain_track_peak).

Special fields
$extra(name): Returns the value of the special field called name. These fields can also be accessed as %_name%; note the additional underscore. The following field names can be used:
filename: Returns the filename without directory and extension.
filename_ext: Returns the filename with extension, but without the directory.
directoryname: Returns the name of the parent directory only, not the complete path.
path: Returns the path.
path_raw: Returns the path as a URL including the protocol scheme.
subsong: Returns the subsong index. The subsong index is used to distinguish multiple tracks in a single file, for example for cue sheets, tracker modules, and various container formats.
foobar2000_version: Returns a string representing the version of foobar2000.
length: Returns the length of the track formatted as hours, minutes, and seconds.
length_ex: Returns the length of the track formatted as hours, minutes, seconds, and milliseconds.
length_seconds: Returns the length of the track in seconds.
length_seconds_fp: Returns the length of the track in seconds as a floating point number.
length_samples: Returns the length of the track in samples.

Variable operations
Variables can be used to store strings and numbers. They cannot store truth values.
They are best used to store intermediate results that you need multiple times. Variable names are not case-sensitive.
$get(name): Returns the value that was last stored in the variable name; if the variable was not defined (yet), it returns nothing. The truth value returned by $get indicates whether the variable name was defined.
$put(name,value): Stores value in the variable name and returns value unaltered.
$puts(name,value): Stores value in the variable name and returns only the truth value of value.

Component-provided fields and functions on tracks
This section lists components that provide additional fields and functions that are usable in the context of any track.
Playback statistics: Playback statistics homepage, Playback statistics titleformat reference.

Component-specific fields and functions
This section lists components that provide additional fields and functions that are only usable in the context of the particular component.
Album list: The official album list component supports creating multiple tree entries using special commands. Album list homepage, Album list titleformat reference.
Columns UI: Columns UI homepage, global variables reference, playlist colors reference, playlist switcher reference.

Retrieved from "http://wiki.hydrogenaudio.org/index.php?title=Foobar2000:Titleformat_Reference". This page was last modified 22:23, 14 January 2006. Content is available under the GNU Free Documentation License 1.2.
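The $put/$puts/$get semantics above can be modeled with a small Python sketch (the class is hypothetical, purely for illustration; foobar2000 evaluates these inside its own titleformat engine):

```python
# Sketch of foobar2000's variable operations. Variable names are
# case-insensitive per the reference above, so they are lowercased
# before being used as dict keys.
class TitleformatVars:
    def __init__(self):
        self._vars = {}

    def put(self, name, value):
        # $put: store and return the value unaltered
        self._vars[name.lower()] = value
        return value

    def puts(self, name, value):
        # $puts: store, but return only the truth value of value
        self._vars[name.lower()] = value
        return bool(value)

    def get(self, name):
        # $get: returns nothing (here: "") if the variable is undefined
        return self._vars.get(name.lower(), "")
```

For example, put("Artist", "He") followed by get("ARTIST") returns "He", since the lookup ignores case.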
https://w.atwiki.jp/usb_audio/pages/63.html
Source: Audio Devices Rev. 2.0 Spec and Adopters Agreement (ZIP)

Universal Serial Bus Device Class Definition for Audio Data Formats, Release 2.0, May 31, 2006

Offset | Field | Size | Value | Description
5 | bBitResolution | 1 | Number | The number of effectively used bits from the available bits in an audio subslot.

2.3.1.7 Type I Supported Formats
The following paragraphs list all currently supported Type I Audio Data Formats. The bit allocations in the bmFormats field of the class-specific AS interface descriptor for the different Type I Audio Data Formats can be found in Appendix A.2.1, “Audio Data Format Type I Bit Allocations.”

2.3.1.7.1 PCM Format
The PCM (Pulse Coded Modulation) format is the most commonly used audio format to represent audio data streams. The audio data is not compressed and uses a signed two’s-complement fixed-point format. It is left-justified (the sign bit is the Msb) and data is padded with trailing zeros to fill the remaining unused bits of the subslot. The binary point is located to the right of the sign bit so that all values lie within the range [-1, +1).

2.3.1.7.2 PCM8 Format
The PCM8 format is introduced to be compatible with the legacy 8-bit wave format. Audio data is uncompressed and uses 8 bits per sample (bBitResolution = 8). In this case, data is unsigned fixed-point, left-justified in the audio subslot, Msb first. The range is [0,255].

2.3.1.7.3 IEEE_FLOAT Format
The IEEE_FLOAT format is based on the ANSI/IEEE-754 floating-point standard. Audio data is represented using the basic single-precision format. The basic single-precision number is 32 bits wide and has an 8-bit exponent and a 24-bit mantissa. Both mantissa and exponent are signed numbers, but neither is represented in two’s-complement format. The mantissa is stored in sign-magnitude format and the exponent in biased form (also called excess-n form). In biased form, there is a positive integer (called the bias) which is subtracted from the stored number to get the actual number.
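The three fields of the single-precision layout described here can be extracted with a short Python sketch (illustrative only, not part of the specification):

```python
import struct

# Unpack the sign, biased exponent, and mantissa fraction fields of a
# 32-bit IEEE-754 single, as used by the IEEE_FLOAT format
# (bBitResolution = 32, bSubslotSize = 4).
def float_fields(x):
    (bits,) = struct.unpack("<I", struct.pack("<f", x))
    sign = bits >> 31              # 1 bit: sign of the mantissa
    biased_exp = (bits >> 23) & 0xFF   # 8 bits: exponent in biased form
    fraction = bits & 0x7FFFFF     # 23 bits: fractional part of mantissa
    return sign, biased_exp, fraction

# 1.0 = +1.0 * 2^0, so the stored exponent equals the bias (127) and
# the fractional part is zero.
sign, e, frac = float_fields(1.0)
```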
For example, in an eight-bit exponent, the bias is 127. To represent 0, the number 127 is stored. To represent -100, 27 is stored. An exponent of all zeroes and an exponent of all ones are both reserved for special cases, so in an eight-bit field, exponents of -126 to +127 are possible. In the basic floating-point format, the mantissa is assumed to be normalized so that the most significant bit is always one, and therefore is not stored. Only the fractional part is stored. Denormalized (exponent = 0) values are considered to be zero.
The 32-bit IEEE-754 floating-point word is broken into three fields. The most significant bit stores the sign of the mantissa, the next group of 8 bits stores the exponent in biased form, and the remaining 23 bits store the magnitude of the fractional portion of the mantissa. For further information, refer to the ANSI/IEEE-754 standard. The data is conveyed over USB using 32 bits per sample (bBitResolution = 32; bSubslotSize = 4).

2.3.1.7.4 ALaw Format and μLaw Format
Starting from 12- or 16-bit linear PCM samples, simple compression down to 8 bits per sample (one byte per sample) can be achieved by using logarithmic companding. The compressed audio data uses 8 bits per sample (bBitsPerSample = 8). Data is signed fixed-point, left-justified in the subslot, Msb first. The compressed range is [-128,128]. The difference between ALaw and μLaw compression lies in the formulae used to achieve the compression. Refer to the ITU G.711 standard for further details.

2.3.1.7.5 Type I Raw Data
This audio format is included to allow transport of data (audio or other) over a USB AudioStreaming interface in the form of PCM-like audio slots when the actual format or even the meaning of the transported data is unknown. The USB pipe simply acts as a pass-through.
As a consequence, such data can never be interpreted inside the audio function and can only be routed from an Input Terminal to one or more Output Terminals. From a USB standpoint, the data is packed as if it were Type I formatted audio data, but the data is never to be interpreted as being audio data.

2.3.2 Type II Formats
Type II formats are used to transmit non-PCM encoded audio data in bit streams that consist of a sequence of encoded audio frames.

2.3.2.1 Encoded Audio Frames
An encoded audio frame is a sequence of bits that contains an encoded representation of one or more physical audio channels. The encoding takes place over a fixed number of audio slots. Each encoded audio frame contains enough information to entirely reconstruct the audio samples (albeit not losslessly) encoded in the encoded audio frame. No information from adjacent encoded audio frames is needed during decoding. The number of audio slots used to construct one encoded audio frame depends on the encoding scheme. (For MPEG, the number of slots per encoded audio frame (nf) is 384 for Layer I or 1152 for Layer II. For AC-3, the number of slots is 1536.) In most cases, the encoded audio frame represents multiple physical audio channels. The number of bits per encoded audio frame may be variable. The content of the encoded audio frame is defined according to the implemented encoding scheme. Where applicable, the bit ordering shall be MSB first, relative to existing standards of serial transmission or storage of that encoding scheme. An encoded audio frame represents an interval longer than the USB (micro)frame. This is typical of audio compression algorithms that use psycho-acoustic or vocal tract parametric models.
Note: It is important to make a clear distinction between a USB frame and an encoded audio frame. The overloaded use of the term frame could cause confusion.
Therefore, this specification will always use the qualifier ‘encoded audio’ to refer to MPEG or AC-3 encoded audio frames.

2.3.2.2 Audio Bit Streams
An encoded audio bit stream is a concatenation of a potentially very large number of encoded audio frames, ordered according to ascending time. Subsequent encoded audio frames are independent and can be decoded separately.

2.3.2.3 USB Packets
Encoded audio bit streams are packetized when transported over an isochronous pipe. Each virtual frame packet potentially contains only part of a single encoded audio frame. Packet sizes are determined according to the short-packet protocol. The encoded audio frame is broken down into a number of packets, each containing wMaxPacketSize bytes except for the last packet, which may be smaller and contains the remainder of the encoded audio frame. If the MaxPacketsOnly bit D7 in the bmAttributes field of the class-specific endpoint descriptor is set, the last (short) packet must be padded with zero bytes to wMaxPacketSize length. No virtual frame packet may contain bits belonging to different encoded audio frames. If the encoded audio frame length is not a multiple of 8 bits, the last byte in the last packet is padded with zero bits. The decoder must ignore all padded extra bits and bytes. Consecutive encoded audio frames are separated by at least one Transfer Delimiter. A Transfer Delimiter must be sent in all virtual frames until the next encoded audio frame is due. The above rules guarantee that a new encoded audio frame always starts on a virtual frame packet boundary.

2.3.2.4 Bandwidth Allocation
The encoded audio frame time tf equals the number of audio slots per encoded audio frame nf divided by the sampling rate fs of the original audio samples.
tf = nf / fs

The allocated bandwidth for the pipe must accommodate the largest possible encoded audio frame to be transmitted within an encoded audio frame time. This should take into account the Transfer Delimiter requirement and any differences between the time base of the stream and the USB (micro)frame timer. The device may choose to consume more bandwidth than necessary (by increasing the reported wMaxPacketSize) to minimize the time needed to transmit an entire encoded audio frame. This can be used to enable early decoding and therefore minimize system latency.

2.3.2.5 Timing
The timing reference point is the beginning of an encoded audio frame. Therefore, the USB packet that contains the first bits (usually the encoded audio frame sync word) of the encoded audio frame is used as a timing reference in USB space. This USB packet is called the reference packet. The transmission of the reference packet of an encoded audio frame should begin at the target playback time of that frame (minus the endpoint’s reported delay) rounded to the nearest USB (micro)frame time. This guarantees that, at the receiving end, the arrival of subsequent reference packets matches the encoded audio frame time tf as closely as possible.

2.3.2.6 Type II Format Type Descriptor
The Type II Format Type descriptor starts with the usual three fields: bLength, bDescriptorType and bDescriptorSubtype. The bFormatType field indicates this is a Type II descriptor. The wMaxBitRate field contains the maximum number of bits per second this interface can handle. It is a measure for the buffer size available in the interface. The wSlotsPerFrame field contains the number of PCM audio slots contained within a single encoded audio frame.

Table 2-3: Type II Format Type Descriptor
Offset | Field | Size | Value | Description
0 | bLength | 1 | Number | Size of this descriptor, in bytes: 8
1 | bDescriptorType | 1 | Constant | CS_INTERFACE descriptor type.
2 | bDescriptorSubtype | 1 | Constant | FORMAT_TYPE descriptor subtype.
3 | bFormatType | 1 | Constant | FORMAT_TYPE_II. Constant identifying the Format Type the AudioStreaming interface is using.
4 | wMaxBitRate | 2 | Number | Indicates the maximum number of bits per second this interface can handle. Expressed in kbits/s.
6 | wSlotsPerFrame | 2 | Number | Indicates the number of PCM audio slots contained in one encoded audio frame.

2.3.2.7 Rate feedback
If the isochronous data endpoint needs explicit rate feedback (adaptive source, asynchronous sink), the feedback pipe must report the number of equivalent PCM audio slots. The host will accumulate this data and start transmission of an encoded audio frame whenever the current number of audio slots exceeds the number of slots per encoded audio frame. The remainder is kept in the accumulator.

2.3.2.8 Type II Supported Formats
The following sections list all currently supported Type II Audio Data Formats. The bit allocations in the bmFormats field of the class-specific AS interface descriptor for the different Type II Audio Data Formats can be found in Appendix A.2.2, “Audio Data Format Type II Bit Allocations.”

2.3.2.8.1 MPEG Format
Refer to the ISO/IEC 11172-3:1993 “Information technology -- Coding of moving pictures and associated audio for digital storage media at up to about 1,5 Mbit/s -- Part 3: Audio” and the ISO/IEC 13818-3:1998 “Information technology -- Generic coding of moving pictures and associated audio information -- Part 3: Audio” specifications for detailed format information.

2.3.2.8.2 AC-3 Format
Refer to the Digital Audio Compression Standard (AC-3), ATSC A/52A, Aug. 20, 2001, for detailed format information.

2.3.2.8.3 WMA Format
This is an audio compression format from Microsoft. For technical and licensing information, contact Microsoft directly (http://www.microsoft.com/windows/windowsmedia/default.aspx).
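The rate-feedback accumulation described in 2.3.2.7 can be sketched as follows (a hypothetical helper; 1152 is the MPEG Layer II slots-per-frame value quoted earlier in this chapter):

```python
# The host accumulates the PCM audio slot counts reported over the
# feedback pipe and releases one encoded audio frame each time the
# accumulator reaches the number of slots per frame; the remainder
# stays in the accumulator.
SLOTS_PER_FRAME = 1152  # MPEG Layer II example value

def frames_to_send(feedback_slots, acc=0):
    sent = 0
    for reported in feedback_slots:
        acc += reported
        while acc >= SLOTS_PER_FRAME:
            acc -= SLOTS_PER_FRAME  # remainder kept in the accumulator
            sent += 1
    return sent, acc
```

For example, two feedback reports of 1000 slots each allow one 1152-slot frame to be sent, leaving 848 slots in the accumulator.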
2.3.2.8.4 DTS Format
Refer to the ETSI Specification TS 102 114, “DTS Coherent Acoustics; Core and Extensions”, available from http://webapp.etsi.org/action%5CPU/20020827/ts_102114v010101p.pdf.

2.3.2.8.5 Type II Raw Data
This audio format is included to allow transport of data (audio or other) over a USB AudioStreaming interface in the form of a bit stream when the actual format or even the meaning of the transported data is unknown. The USB pipe simply acts as a pass-through. As a consequence, such data can never be interpreted inside the audio function and can only be routed from an Input Terminal to one or more Output Terminals. From a USB standpoint, the data is packed as if it were Type II formatted audio data, but the data is never to be interpreted as being audio data.

2.3.3 Type III Formats
These formats are based upon the IEC61937 standard. The IEC61937 standard describes a method to transfer non-PCM encoded audio bit streams over an IEC60958 digital audio interface, together with the transfer of the accompanying “Channel Status” and “User Data.” The IEC60958 standard specifies a widely used method of interconnecting digital audio equipment with two-channel linear PCM audio. The IEC61937 standard describes a way in which the IEC60958 interface must be used to convey non-PCM encoded audio bit streams for consumer applications. The same basic techniques used in IEC61937 are reused here to convey non-PCM encoded audio bit streams over a Type III formatted audio stream. From a USB transfer standpoint, the data streaming over the interface looks exactly like two-channel 16-bit PCM audio data.

2.3.3.1 Type III Format Type Descriptor
The bFormatType field indicates this is a Type III descriptor. The bSubSlotSize field indicates how many bytes are used to transport an audio subslot.
The bBitResolution field indicates how many bits of the total number of available bits in the audio subslot are truly used by the audio function to convey audio information.

Table 2-4: Type III Format Type Descriptor
Offset | Field | Size | Value | Description
0 | bLength | 1 | Number | Size of this descriptor, in bytes: 6
1 | bDescriptorType | 1 | Constant | CS_INTERFACE descriptor type.
2 | bDescriptorSubtype | 1 | Constant | FORMAT_TYPE descriptor subtype.
3 | bFormatType | 1 | Constant | FORMAT_TYPE_III. Constant identifying the Format Type the AudioStreaming interface is using.
4 | bSubslotSize | 1 | Number | The number of bytes occupied by one audio subslot. Must be set to two.
5 | bBitResolution | 1 | Number | The number of effectively used bits from the available bits in an audio subframe.

2.3.3.2 Type III Supported Formats
Refer to the ISO/IEC 60958 and ISO/IEC 61937 (several parts) specifications for detailed format information. The bit allocations in the bmFormats field of the class-specific AS interface descriptor for the different Type III Audio Data Formats can be found in Appendix A.2.3, “Audio Data Format Type III Bit Allocations.” The following is a list of formats that are covered or will be covered by the above specifications.
• IEC61937_AC-3
• IEC61937_MPEG-1_Layer1
• IEC61937_MPEG-1_Layer2/3 or IEC61937_MPEG-2_NOEXT
• IEC61937_MPEG-2_EXT
• IEC61937_MPEG-2_AAC_ADTS
• IEC61937_MPEG-2_Layer1_LS
• IEC61937_MPEG-2_Layer2/3_LS
• IEC61937_DTS-I
• IEC61937_DTS-II
• IEC61937_DTS-III
• IEC61937_ATRAC
• IEC61937_ATRAC2/3
In addition, the WMA audio compression format as defined by Microsoft is supported.

2.3.4 Type IV Formats
Type IV formats can only be used on external connections to the audio function that do not use a USB pipe for their data transport but that do need an AudioStreaming interface to control an encoder or decoder process in one or more of its Alternate Settings.
A typical example of such a connection is an S/PDIF connector that is capable of handling both PCM stereo audio data streams (IEC60958) in one Alternate
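The fixed six-byte layout of the Type III Format Type descriptor (Table 2-4 above) can be checked with a hypothetical parser sketch; the constant values (CS_INTERFACE = 0x24, FORMAT_TYPE = 0x02, FORMAT_TYPE_III = 0x03) are taken from the broader USB audio class definition, not from this excerpt:

```python
# Hypothetical parser for the Type III Format Type descriptor:
# bLength, bDescriptorType, bDescriptorSubtype, bFormatType,
# bSubslotSize, bBitResolution (6 bytes total).
CS_INTERFACE = 0x24      # assumed class-specific interface descriptor type
FORMAT_TYPE = 0x02       # assumed FORMAT_TYPE descriptor subtype
FORMAT_TYPE_III = 0x03   # assumed Type III format type constant

def parse_format_type_iii(d: bytes):
    assert d[0] == 6 and d[1] == CS_INTERFACE
    assert d[2] == FORMAT_TYPE and d[3] == FORMAT_TYPE_III
    assert d[4] == 2  # bSubslotSize must be set to two for Type III
    return {"bSubslotSize": d[4], "bBitResolution": d[5]}
```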
https://w.atwiki.jp/usb_audio/pages/22.html
Source: Audio Data Formats 1.0 (PDF)

USB Device Class Definition for Audio Data Formats, Release 1.0, March 18, 1998

Offset | Field | Size | Value | Description
8 | tLowerSamFreq | 3 | Number | Lower bound in Hz of the sampling frequency range for this isochronous data endpoint.
11 | tUpperSamFreq | 3 | Number | Upper bound in Hz of the sampling frequency range for this isochronous data endpoint.

Table 2-3: Discrete Number of Sampling Frequencies
Offset | Field | Size | Value | Description
8 | tSamFreq[1] | 3 | Number | Sampling frequency 1 in Hz for this isochronous data endpoint.
… | … | … | … | …
8+(ns-1)*3 | tSamFreq[ns] | 3 | Number | Sampling frequency ns in Hz for this isochronous data endpoint.

Note: In the case of adaptive isochronous data endpoints that support only a discrete number of sampling frequencies, the endpoint must at least tolerate ±1000 PPM inaccuracy on the reported sampling frequencies.

2.2.6 Supported Formats
The following paragraphs list all currently supported Type I Audio Data Formats.

2.2.6.1 PCM Format
The PCM (Pulse Coded Modulation) format is the most commonly used audio format to represent audio data streams. The audio data is not compressed and uses a signed two’s-complement fixed-point format. It is left-justified (the sign bit is the Msb) and data is padded with trailing zeros to fill the remaining unused bits of the subframe. The binary point is located to the right of the sign bit so that all values lie within the range [-1,+1).

2.2.6.2 PCM8 Format
The PCM8 format is introduced to be compatible with the legacy 8-bit wave format. Audio data is uncompressed and uses 8 bits per sample (bBitResolution = 8). In this case, data is unsigned fixed-point, left-justified in the audio subframe, Msb first. The range is [0,255].

2.2.6.3 IEEE_FLOAT Format
The IEEE_FLOAT format is based on the ANSI/IEEE-754 floating-point standard. Audio data is represented using the basic single-precision format. The basic single-precision number is 32 bits wide and has an 8-bit exponent and a 24-bit mantissa.
Both mantissa and exponent are signed numbers, but neither is represented in two’s-complement format. The mantissa is stored in sign-magnitude format and the exponent in biased form (also called excess-n form). In biased form, there is a positive integer (called the bias) which is subtracted from the stored number to get the actual number. For example, in an eight-bit exponent, the bias is 127. To represent 0, the number 127 is stored. To represent -100, 27 is stored. An exponent of all zeroes and an exponent of all ones are both reserved for special cases, so in an eight-bit field, exponents of -126 to +127 are possible. In the basic floating-point format, the mantissa is assumed to be normalized so that the most significant bit is always one, and therefore is not stored. Only the fractional part is stored.
The 32-bit IEEE-754 floating-point word is broken into three fields. The most significant bit stores the sign of the mantissa, the next group of 8 bits stores the exponent in biased form, and the remaining 23 bits store the magnitude of the fractional portion of the mantissa. For further information, refer to the ANSI/IEEE-754 standard. The data is conveyed over USB using 32 bits per sample (bBitResolution = 32; bSubframeSize = 4).

2.2.6.4 ALaw Format and µLaw Format
Starting from 12- or 16-bit linear PCM samples, simple compression down to 8 bits per sample (one byte per sample) can be achieved by using logarithmic companding. The compressed audio data uses 8 bits per sample (bBitsPerSample = 8). Data is signed fixed-point, left-justified in the subframe, Msb first. The compressed range is [-128,128]. The difference between ALaw and µLaw compression lies in the formulae used to achieve the compression. Refer to the ITU G.711 standard for further details.
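The logarithmic companding mentioned here can be sketched in Python using the continuous µ-law curve (µ = 255, as in ITU-T G.711); the 8-bit quantization step and A-law's piecewise-linear variant are omitted, so this illustrates the principle rather than the exact byte format:

```python
import math

# Continuous mu-law companding curve: maps a linear sample in [-1, 1]
# onto a logarithmically compressed value in [-1, 1]. Small signals
# get proportionally more resolution, which is the point of companding
# before quantizing down to 8 bits per sample.
MU = 255.0

def mulaw_compress(x):
    return math.copysign(math.log1p(MU * abs(x)) / math.log1p(MU), x)

def mulaw_expand(y):
    # Exact inverse of mulaw_compress.
    return math.copysign(math.expm1(abs(y) * math.log1p(MU)) / MU, y)
```

The round trip expand(compress(x)) recovers x up to floating-point error; a real codec additionally quantizes the compressed value to one byte.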
2.3 Type II Formats
Type II formats are used to transmit non-PCM encoded audio data in bitstreams that consist of a sequence of encoded audio frames.

2.3.1 Encoded Audio Frames
An encoded audio frame is a sequence of bits that contains an encoded representation of one or more physical audio channels. The encoding takes place over a fixed number of audio samples. Each encoded audio frame contains enough information to entirely reconstruct the audio samples (albeit not losslessly) encoded in the encoded audio frame. No information from adjacent encoded audio frames is needed during decoding. The number of samples used to construct one encoded audio frame depends on the encoding scheme. (For MPEG, the number of samples per encoded audio frame (nf) is 384 for Layer I or 1152 for Layer II. For AC-3, the number of samples is 1536.) In most cases, the encoded audio frame represents multiple physical audio channels. The number of bits per encoded audio frame may be variable. The content of the encoded audio frame is defined according to the implemented encoding scheme. Where applicable, the bit ordering shall be MSB first, relative to existing standards of serial transmission or storage of that encoding scheme. An encoded audio frame represents an interval longer than the USB frame time of 1 ms. This is typical of audio compression algorithms that use psycho-acoustic or vocal tract parametric models.
Note: It is important to make a clear distinction between an audio frame (see Section 2.2.3, “Audio Frame”) and an encoded audio frame. The overloaded use of the term audio frame could cause confusion. Therefore, this specification will always use the qualifier ‘encoded’ to refer to MPEG or AC-3 encoded audio frames.

2.3.2 Audio Bitstreams
An encoded audio bitstream is a concatenation of a potentially very large number of encoded audio frames, ordered according to ascending time. Subsequent encoded audio frames are independent and can be decoded separately.
2.3.3 USB Packets
Encoded audio bitstreams are packetized when transported over an isochronous pipe. Each USB packet contains only part of a single encoded audio frame. Packet sizes are determined according to the short-packet protocol. The encoded audio frame is broken down into a number of packets, each containing wMaxPacketSize bytes except for the last packet, which may be smaller and contains the remainder of the encoded audio frame. If the MaxPacketsOnly bit D7 in the bmAttributes field of the class-specific endpoint descriptor is set, the last (short) packet must be padded with zero bytes to wMaxPacketSize length. No USB packet may contain bits belonging to different encoded audio frames. If the encoded audio frame length is not a multiple of 8 bits, the last byte in the last packet is padded with zero bits. The decoder must ignore all padded extra bits and bytes. Consecutive encoded audio frames are separated by at least one Transfer Delimiter. A Transfer Delimiter must be sent in all consecutive USB frames until the next encoded audio frame is due. The above rules guarantee that a new encoded audio frame always starts on a USB packet boundary.

2.3.4 Bandwidth Allocation
The encoded audio frame time tf equals the number of audio samples per encoded audio frame nf divided by the sampling rate fs of the original audio samples:

tf = nf / fs

The allocated bandwidth for the pipe must accommodate the largest possible encoded audio frame to be transmitted within an encoded audio frame time. This should take into account the Transfer Delimiter requirement and any differences between the time base of the stream and the USB frame timer. The device may choose to consume more bandwidth than necessary (by increasing the reported wMaxPacketSize) to minimize the time needed to transmit an entire encoded audio frame. This can be used to enable early decoding and therefore minimize system latency.
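The packetization rules of 2.3.3 can be sketched in Python (a hypothetical helper, not part of the specification):

```python
# Split one encoded audio frame into wMaxPacketSize-byte packets.
# Only the last packet may be short; when the MaxPacketsOnly bit is
# set, that last packet is zero-padded to wMaxPacketSize. No packet
# ever mixes bytes from two encoded audio frames.
def packetize(frame: bytes, w_max_packet_size: int, max_packets_only: bool):
    packets = [frame[i:i + w_max_packet_size]
               for i in range(0, len(frame), w_max_packet_size)]
    if max_packets_only and packets and len(packets[-1]) < w_max_packet_size:
        packets[-1] = packets[-1].ljust(w_max_packet_size, b"\x00")
    return packets
```

For a 10-byte frame and wMaxPacketSize = 4 this yields packets of 4, 4, and 2 bytes; with MaxPacketsOnly set, the last packet is padded to 4 bytes.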
2.3.5 Timing
The timing reference point is the beginning of an encoded audio frame. Therefore, the USB packet that contains the first bits (usually the encoded audio frame sync word) of the encoded audio frame is used as a timing reference in USB space. This USB packet is called the reference packet. The transmission of the reference packet of an encoded audio frame should begin at the target playback time of that frame (minus the endpoint’s reported delay) rounded to the nearest USB frame time. This guarantees that, at the receiving end, the arrival of subsequent reference packets matches the encoded audio frame time tf as closely as possible.

2.3.6 Type II Format Type Descriptor
The Type II Format Type descriptor starts with the usual three fields: bLength, bDescriptorType and bDescriptorSubtype. The bFormatType field indicates this is a Type II descriptor. The wMaxBitRate field contains the maximum number of bits per second this interface can handle. It is a measure for the buffer size available in the interface. The wSamplesPerFrame field contains the number of non-PCM encoded audio samples contained within a single encoded audio frame. The sampling frequency capabilities of the endpoint are reported using the bSamFreqType field and the following fields.

Table 2-4: Type II Format Type Descriptor
Offset | Field | Size | Value | Description
0 | bLength | 1 | Number | Size of this descriptor, in bytes: 9+(ns*3)
1 | bDescriptorType | 1 | Constant | CS_INTERFACE descriptor type.
2 | bDescriptorSubtype | 1 | Constant | FORMAT_TYPE descriptor subtype.
3 | bFormatType | 1 | Constant | FORMAT_TYPE_II. Constant identifying the Format Type the AudioStreaming interface is using.
4 | wMaxBitRate | 2 | Number | Indicates the maximum number of bits per second this interface can handle. Expressed in kbits/s.
6 | wSamplesPerFrame | 2 | Number | Indicates the number of PCM audio samples contained in one encoded audio frame.
8 | bSamFreqType | 1 | Number | Indicates how the sampling frequency can be programmed: 0: Continuous sampling frequency; 1..255: The number of discrete sampling frequencies supported by the isochronous data endpoint of the AudioStreaming interface (ns).
9... | | | | See the sampling frequency tables below.

Depending on the value in the bSamFreqType field, the layout of the next part of the descriptor is as shown in the following tables.

Table 2-5: Continuous Sampling Frequency
Offset | Field | Size | Value | Description
9 | tLowerSamFreq | 3 | Number | Lower bound in Hz of the sampling frequency range for this isochronous data endpoint.
12 | tUpperSamFreq | 3 | Number | Upper bound in Hz of the sampling frequency range for this isochronous data endpoint.

Table 2-6: Discrete Number of Sampling Frequencies
Offset | Field | Size | Value | Description
9 | tSamFreq[1] | 3 | Number | Sampling frequency 1 in Hz for this isochronous data endpoint.
… | … | … | … | …
9+(ns-1)*3 | tSamFreq[ns] | 3 | Number | Sampling frequency ns in Hz for this isochronous data endpoint.

Note: In the case of adaptive isochronous data endpoints that support only a discrete number of sampling frequencies, the endpoint must at least tolerate ±1000 PPM inaccuracy on the reported sampling frequencies.

2.3.7 Rate feedback
If the isochronous data endpoint needs explicit rate feedback (adaptive source, asynchronous sink), the feedback pipe shall report the number of equivalent PCM audio samples. The host will accumulate this data and start transmission of an encoded audio frame whenever the current number of samples exceeds the number of samples per encoded audio frame. The remainder is kept in the accumulator.

2.3.8 Supported Formats
The following sections list all currently supported Type II Audio Data Formats. Format-specific descriptors and format-specific requests are explained in more detail.
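The variable-length layout of Table 2-4 (bLength = 9+(ns*3), discrete frequencies as 3-byte Hz fields) can be sketched with a hypothetical Python builder; the descriptor constants (CS_INTERFACE = 0x24, FORMAT_TYPE = 0x02, FORMAT_TYPE_II = 0x02) and the little-endian byte order of USB multi-byte fields are assumptions taken from the broader USB audio class definition, not from this excerpt:

```python
# Build a Type II Format Type descriptor with ns discrete sampling
# frequencies. Each tSamFreq entry is a 3-byte little-endian value
# holding the frequency in Hz.
CS_INTERFACE = 0x24    # assumed constant
FORMAT_TYPE = 0x02     # assumed constant
FORMAT_TYPE_II = 0x02  # assumed constant

def type_ii_descriptor(w_max_bit_rate, w_samples_per_frame, freqs_hz):
    ns = len(freqs_hz)
    d = bytearray([9 + 3 * ns, CS_INTERFACE, FORMAT_TYPE, FORMAT_TYPE_II])
    d += w_max_bit_rate.to_bytes(2, "little")       # offset 4: wMaxBitRate
    d += w_samples_per_frame.to_bytes(2, "little")  # offset 6: wSamplesPerFrame
    d.append(ns)                                    # offset 8: bSamFreqType
    for f in freqs_hz:
        d += f.to_bytes(3, "little")                # offsets 9...: tSamFreq[i]
    return bytes(d)
```

With two frequencies, bLength works out to 9 + 2*3 = 15 bytes, matching the computed total length of the descriptor.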
2.3.8.1 MPEG Format
In the current specification, only MPEG decoding aspects are considered. Real-time MPEG encoding peripherals are not (yet) available and consequently are not covered by this specification.

2.3.8.1.1 MPEG Format-Specific Descriptor
The wFormatTag field is a duplicate of the wFormatTag field in the class-specific AudioStreaming interface descriptor. The same field is used here to identify the format-specific descriptor.
The bmMPEGCapabilities bitmap field describes the capabilities of the MPEG decoder built into the AudioStreaming interface. Some general information must be retrieved from the Format Type-specific descriptor. For instance, the sampling frequencies supported by the decoder are reported through the Format Type-specific descriptor. This includes the ability of the decoder to handle low sampling frequencies (16 kHz, 22.05 kHz and 24 kHz) besides the standard 32 kHz, 44.1 kHz and 48 kHz sampling frequencies.
Bits D2..0 of the bmMPEGCapabilities field are used to indicate which layers this decoder is capable of processing. The different layers relate to the different algorithms that are used during encoding and decoding.
Bit D3 indicates that the decoder can only process the MPEG-1 base stream. Therefore, only Left and Right channels will be output.
Bit D4 indicates that the decoder can handle MPEG-2 streams that contain two independent stereo pairs instead of the normal 3/2 encoding scheme. This bit is only applicable for MPEG-2 decoders.
Bit D5 indicates that the decoder supports the MPEG dual channel mode. In this case, the MPEG-1 base stream does not contain Left and Right channels of a stereo pair but instead contains two independent mono channels. One of these channels can be selected through the proper request (Dual Channel Control) and reproduced over the Left and Right output channels simultaneously.
Bit D6 indicates that the decoder supports the DVD MPEG-2 augmentation to 7.1 channels instead of the standard 5.1 channels.
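A sketch of unpacking the bmMPEGCapabilities bits described above; the assumption that D0, D1, and D2 correspond to Layer I, II, and III respectively comes from the full descriptor table, which is not part of this excerpt:

```python
# Decode the capability bits of bmMPEGCapabilities into named flags.
# Bit-to-layer mapping for D0..D2 is an assumption (see lead-in).
def mpeg_capabilities(bm):
    return {
        "layer1": bool(bm & 0x01),             # D0 (assumed Layer I)
        "layer2": bool(bm & 0x02),             # D1 (assumed Layer II)
        "layer3": bool(bm & 0x04),             # D2 (assumed Layer III)
        "mpeg1_only": bool(bm & 0x08),         # D3: MPEG-1 base stream only
        "mpeg2_dual_stereo": bool(bm & 0x10),  # D4: two independent stereo pairs
        "dual_channel": bool(bm & 0x20),       # D5: MPEG dual channel mode
        "augmented_7_1": bool(bm & 0x40),      # D6: DVD MPEG-2 7.1 augmentation
    }
```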
https://w.atwiki.jp/usb_audio/pages/62.html
Original: Audio Devices Rev. 2.0 Spec and Adopters Agreement (ZIP)

Universal Serial Bus Device Class Definition for Audio Data Formats, Release 2.0, May 31, 2006

2 Audio Data Formats

Audio Data formats can be divided in two main groups:
• Simple Audio Data Formats
• Extended Audio Data Formats

Simple Audio Data Formats can then be subdivided into four groups according to type.

The first group, Type I, deals with audio data streams that are transmitted over USB and are constructed on a sample-by-sample basis. Each audio sample is represented by a single independent symbol, contained in an audio subslot. Different compression schemes may be used to transform the audio samples into symbols.

Note: This is different from encoding. Compression is considered to take place on a per-audio-sample basis. Each audio sample generates one symbol (e.g. A-law compression, where a 16-bit audio sample is compressed into an 8-bit symbol).

If multiple physical audio channels are formatted into a single audio channel cluster, then samples at time x of subsequent channels are first contained in audio subslots. These audio subslots are then interleaved, according to the cluster channel ordering as described in the main USB Audio Specification, and then grouped into an audio slot. The audio samples taken at time x+1 are interleaved in the same fashion to generate the next audio slot, and so on. The notion of physical channels is explicitly preserved during transmission. A typical example of Type I formats is standard PCM audio data. The following figure illustrates the concept.

[Figure 2-1 Type I Audio Stream]

The second group, Type II, deals with those formats that do not preserve the notion of physical channels during the transmission over USB. Typically, all non-PCM encoded audio data streams belong to this group.
A number of audio samples, often originating from multiple physical channels and taken over a certain period of time, are encoded into a number of bits in such a way that, after transmission, the original audio samples can be reconstructed to a certain degree of accuracy. The number of bits used for transmission is typically one or more orders of magnitude smaller than the number of bits needed to represent the original PCM audio samples, effectively realizing a considerable bandwidth reduction during transmission.

[Figure 2-2 Type II Audio Stream]

The third group, Type III, contains special formats that do not fit in either of the previous groups. In fact, they mix characteristics of the Type I and Type II groups to transmit audio data streams over USB. One or more non-PCM encoded audio data streams are packed into “pseudo-stereo samples” and transmitted as if they were real stereo PCM audio samples. The sampling frequency of these pseudo samples matches the sampling frequency of the original PCM audio data streams. Therefore, clock recovery at the receiving end is easier than it is in the case of Type II formats. The drawback is that unless multiple non-PCM encoded streams are packed into one pseudo-stereo stream, more bandwidth than necessary is consumed.

The fourth group, Type IV, deals with audio streams that are not transmitted over USB. Instead, they interface with the audio function through an AudioStreaming interface that does not contain a USB isochronous IN or OUT endpoint. These streams typically connect via a digital interface like S/PDIF (or some other means of connectivity) but require interaction from the Host before they enter or leave the audio function. A typical example is an external S/PDIF connector that can accept an AC-3 encoded audio stream.
This stream is first processed by an AC-3 decoder before the (decoded) logical audio channels enter the audio function through the Input Terminal that represents this S/PDIF connection. The capabilities of the AC-3 decoder are advertised by means of the AC-3 Decoder descriptor, and the decoder Controls can be programmed through the AudioStreaming interface.

In addition to the Simple Audio Data Formats described above, Extended Audio Data Formats are defined. These are based on the Simple Audio Data Format Type I, II, and III definitions, but they provide an optional packet header and, for the Extended Audio Data Format Type I, an optional synchronous (i.e. sample-accurate) control channel. Type IV Audio Data Formats do not have an Extended Audio Data Format definition.

Section A.1, “Format Type Codes” summarizes the Audio Data Formats that are currently supported by the Audio Device Class. The following sections explain those formats in more detail.

2.1 Transfer Delimiter

Isochronous data streams are continuous in nature, although the actual number of bytes sent per packet may vary throughout the lifetime of the stream (for rate adaptation purposes, for instance). To indicate a temporary stop in the isochronous data stream without closing the pipe (and thus relinquishing the USB bandwidth), an in-band Transfer Delimiter needs to be defined.

This specification considers two situations to be a Transfer Delimiter. The first is a zero-length data packet and the second is the absence of an isochronous transfer in a USB (micro)frame that would normally have an isochronous transfer. Both situations are considered equivalent and the audio function is expected to behave the same. However, the second type consumes less isochronous USB bandwidth (i.e. zero bandwidth). In both cases, this specification considers a Transfer Delimiter to be an entity that can be sent over the USB.
2.2 Virtual Frame and Virtual Frame Packet Definitions

To better describe packetization for audio, the concept of a “virtual frame” (VF) is introduced. A virtual frame is defined as:

    VF = (micro)frame * 2^(bInterval-1)

In addition, a “virtual frame packet” (VFP) is introduced. A virtual frame packet is defined as a packet that contains all the samples that are transferred over the bus during a virtual frame. For full-/high-speed endpoints, the virtual frame packets are exactly the same as the physical packets that are transferred over USB. However, for high-speed high-bandwidth endpoints, the virtual frame packet is the concatenation of the two or three physical packets that are transferred over the bus in a microframe.

Note: The USB Specification already considers the 2 or 3 transactions of a high-speed high-bandwidth transfer to be part of a single packet. See Section 5.12.3, “Clock Synchronization”.

The above definitions provide a model of ‘one (virtual frame) packet per (virtual) frame’, irrespective of the actual transactions on the USB.

2.3 Simple Audio Data Formats

2.3.1 Type I Formats

The following sections describe the Audio Data Formats that belong to Type I. A number of terms and their definitions are presented.

2.3.1.1 USB Packets

Audio data streams that are inherently continuous must be packetized when sent over the USB. The quality of the packetizing algorithm directly influences the amount of effort needed to reconstruct a reliable sample clock at the receiving side. The goal must be to keep the instantaneous number of audio slots per virtual frame, ni, as close as possible to the average number of audio slots per virtual frame, nav. The average nav must be calculated as follows:

    nav = TVF / Δt

where TVF is the duration of a virtual frame and Δt is the sample time (1/FS). In most cases nav will be a number with a fractional part. If the sampling rate is a constant, the allowable variation on ni is limited to one audio slot, that is, Δni = 1.
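As a numeric illustration of these definitions, the following sketch computes nav = TVF/Δt = FS·TVF; it assumes the usual USB frame durations (1 ms full-speed frame, 125 µs high-speed microframe), and the names are ours:

```java
// Compute nav, the average number of audio slots per virtual frame,
// for a given sampling frequency and bInterval.
class VfMath {
    static double navSlots(double fsHz, int bInterval, boolean highSpeed) {
        double frameSec = highSpeed ? 125e-6 : 1e-3;     // (micro)frame duration
        double tvf = frameSec * (1L << (bInterval - 1)); // VF = (micro)frame * 2^(bInterval-1)
        return fsHz * tvf;                               // nav = TVF / Δt = FS * TVF
    }
}
```

For FS = 44,100 Hz on a full-speed endpoint with bInterval = 1, this yields nav = 44.1, the value used in the example below.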
This implies that all virtual frame packets must either contain INT(nav) audio slots (small VFP) or INT(nav) + 1 audio slots (large VFP). For all i:

    ni = INT(nav) | INT(nav) + 1

Note: In the case where nav = INT(nav), ni may vary between INT(nav) - 1 (small VFP), INT(nav) (medium VFP) and INT(nav) + 1 (large VFP).

Furthermore, a large VFP must be generated as soon as it becomes available. Typically, a source will generate a number of small VFPs as long as the accumulated fractional part of nav remains < 1. Once the accumulated fractional part of nav becomes ≥ 1, the source must send a large VFP and decrement the accumulator by 1.

Due to possibly different notions of time in the source and the sink (they might each have their own independent sampling clock), the (small VFP)/(large VFP) pattern generated by the source may be different from what the sink expects. Therefore, the sink must be capable of accepting a large VFP at all times.

Example: Assume FS = 44,100 Hz and TVF = 1 ms. Then nav = 44.1 audio slots. Since the source can only send an integer number of audio slots per VF, it will send small VFPs of 44 audio slots. Each VF, it therefore sends ‘0.1 slot’ too few, and it will accumulate this fractional part in an accumulator. After having sent 9 small VFPs of 44 audio slots, at the tenth VF it will have exactly one audio slot in excess and therefore can send a large VFP containing 45 audio slots. Decrementing the accumulator by 1 brings it back to 0, and the process can start all over again. The source will thus produce a repetitive pattern of 9 small VFPs of 44 audio slots followed by 1 large VFP of 45 audio slots.
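The accumulator scheme in the example can be sketched as follows (class, method, and variable names are ours; the small epsilon only guards against floating-point drift):

```java
import java.util.ArrayList;
import java.util.List;

// Emit a sequence of small/large VFP sizes using a fractional-part accumulator.
class Packetizer {
    static List<Integer> packetize(double nav, int frames) {
        List<Integer> sizes = new ArrayList<>();
        int base = (int) nav;       // INT(nav): slots in a small VFP
        double frac = nav - base;   // fraction accumulated each virtual frame
        double acc = 0.0;
        for (int i = 0; i < frames; i++) {
            acc += frac;
            if (acc >= 1.0 - 1e-9) {    // one whole slot in excess: large VFP
                sizes.add(base + 1);
                acc -= 1.0;
            } else {
                sizes.add(base);        // small VFP
            }
        }
        return sizes;
    }
}
```

With nav = 44.1, this produces the repetitive pattern of nine 44-slot VFPs followed by one 45-slot VFP described in the example.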
The following table illustrates the process:

Table 2-1 Packetization
#VF    nav   ni  Fraction  Accumulator
n      44.1  44  0.1       0.1
n+1    44.1  44  0.1       0.2
n+2    44.1  44  0.1       0.3
n+3    44.1  44  0.1       0.4
n+4    44.1  44  0.1       0.5
n+5    44.1  44  0.1       0.6
n+6    44.1  44  0.1       0.7
n+7    44.1  44  0.1       0.8
n+8    44.1  44  0.1       0.9
n+9    44.1  45  0.1       1.0 → 0
n+10   44.1  44  0.1       0.1
n+11   44.1  44  0.1       0.2
…      …     …   …         …
(The original table has no borders.)

2.3.1.2 Pitch Control

If the sampling rate can be varied (to implement pitch control), the allowable variation on ni is limited to one audio slot per virtual frame. For all i:

    ni+1 = ni | ni ± 1

Pitch control is restricted to adaptive endpoints only. AudioStreaming interfaces that support pitch control on their isochronous endpoint are required to report this in the class-specific endpoint descriptor. In addition, a Set/Get Pitch Control request is required to enable or disable the pitch control functionality.

2.3.1.3 Audio Subslot

The basic structure used to represent audio data is the audio subslot. An audio subslot holds a single audio sample. An audio subslot always contains an integer number of bytes. This specification limits the possible audio subslot sizes (bSubslotSize) to 1, 2, 3 or 4 bytes per audio subslot. An audio sample is represented using a number of bits (bBitResolution) less than or equal to the total number of bits available in the audio subslot, i.e. bBitResolution ≤ bSubslotSize*8.

AudioStreaming endpoints must be constructed in such a way that a valid transfer can take place as long as the reported audio subslot size (bSubslotSize) is respected during transmission. If the reported bits per sample (bBitResolution) do not correspond with the number of significant bits actually used during transfer, the device will either discard trailing significant bits ([actual_bits_per_sample] > bBitResolution) or interpret trailing zeros as significant bits ([actual_bits_per_sample] < bBitResolution).
2.3.1.4 Audio Slot

An audio slot consists of a collection of audio subslots, each containing an audio sample of a different physical audio channel, taken at the same moment in time. The number of audio subslots in an audio slot equals the number of logical audio channels in the audio channel cluster. The ordering of the audio subslots in the audio slot obeys the rules set forth in the USB Audio Specification. All audio subslots must have the same audio subslot size.

2.3.1.5 Audio Streams

An audio stream is a concatenation of a potentially very large number of audio slots, ordered according to ascending time. Streams are packetized when transported over USB, whereby virtual frame packets can only contain an integer number of audio slots. Each packet always starts with the same channel, and the channel order is respected throughout the entire transmission. If, for any reason, there are no audio slots available to construct a VFP, a Transfer Delimiter must be sent instead.

2.3.1.6 Type I Format Type Descriptor

The Type I format type descriptor starts with the usual three fields bLength, bDescriptorType, and bDescriptorSubtype. The bFormatType field indicates this is a Type I descriptor. The bSubslotSize field indicates how many bytes are used to transport an audio subslot. The bBitResolution field indicates how many bits of the total number of available bits in the audio subslot are truly used by the audio function to convey audio information.

Table 2-2 Type I Format Type Descriptor
Offset  Field               Size  Value     Description
0       bLength             1     Number    Size of this descriptor, in bytes: 6
1       bDescriptorType     1     Constant  CS_INTERFACE descriptor type.
2       bDescriptorSubtype  1     Constant  FORMAT_TYPE descriptor subtype.
3       bFormatType         1     Constant  FORMAT_TYPE_I. Constant identifying the Format Type the AudioStreaming interface is using.
4       bSubslotSize        1     Number    The number of bytes occupied by one audio subslot. Can be 1, 2, 3 or 4.
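The interleaving of channel samples into audio slots described in Sections 2.3.1.4 and 2.3.1.5 can be sketched in Java for the common case of bSubslotSize = 2 (16-bit PCM). This is an illustration under the assumption that subslot bytes are written least-significant byte first, as is usual for USB data; the names are ours:

```java
// Interleave per-channel 16-bit samples into a stream of audio slots.
// Each audio slot holds one subslot per channel, in cluster channel order.
class SlotBuilder {
    static byte[] build(short[][] channels, int frames) {
        int nch = channels.length;
        byte[] out = new byte[frames * nch * 2]; // bSubslotSize = 2
        int p = 0;
        for (int t = 0; t < frames; t++) {       // one audio slot per sample time t
            for (int ch = 0; ch < nch; ch++) {   // subslots in channel order
                short s = channels[ch][t];
                out[p++] = (byte) (s & 0xFF);        // LSB first
                out[p++] = (byte) ((s >> 8) & 0xFF);
            }
        }
        return out;
    }
}
```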
https://w.atwiki.jp/usb_audio/pages/64.html
Original: Audio Devices Rev. 2.0 Spec and Adopters Agreement (ZIP)

…Setting and encoded data streams (IEC61937) in another Alternate Setting of the interface. Note however that the external connection could also be vendor specific (like a parallel data interface).

2.3.4.1 Type IV Format Type Descriptor

The bFormatType field indicates this is a Type IV descriptor.

Table 2-5 Type IV Format Type Descriptor
Offset  Field               Size  Value     Description
0       bLength             1     Number    Size of this descriptor, in bytes: 4
1       bDescriptorType     1     Constant  CS_INTERFACE descriptor type.
2       bDescriptorSubtype  1     Constant  FORMAT_TYPE descriptor subtype.
3       bFormatType         1     Constant  FORMAT_TYPE_IV. Constant identifying the Format Type the AudioStreaming interface is using.

2.3.4.2 Type IV Supported Formats

This specification supports all Audio Data Formats on an external connection that are defined on a USB pipe (Type I, II, and III). See Section 2.3.1.7, “Type I Supported Formats”, Section 2.3.2.8, “Type II Supported Formats”, and Section 2.3.3.2, “Type III Supported Formats”. The bit allocations in the bmFormats field of the class-specific AS interface descriptor for the different Type IV Audio Data Formats can be found in Appendix A.2.4, “Audio Data Format Type IV Bit Allocations”.

2.4 Extended Audio Data Formats

Extended Audio Data Formats add support for a Packet Header to the previously defined Simple Audio Data Formats Type I, II, and III. For the Extended Audio Data Format Type I, an additional optional synchronous Control Channel is defined.

2.4.1 Extended Type I Formats

Extended Audio Data Format Type I adds support for both a Packet Header and a synchronous Control Channel to the Simple Type I Format definition. All three elements (Packet Header, audio data, and Control Channel) of an Extended Type I packet are optional. The Extended Format Type I descriptor (see further) indicates which elements are present.
It is therefore possible to provide only a Control Channel, without Packet Header or audio data. The following figure further illustrates the concept.

[Figure 2-3 Extended Type I Format]

Each Virtual Frame Packet (VFP) can start with an optional Packet Header. If Packet Headers are used, they must be present in every VFP. The length of the Packet Header must be the same for every VFP. The Packet Header is then followed by a number of Extended Audio Slots. An Extended Audio Slot is the concatenation of a Control Word, followed by the Type I Audio Slot. The Control Channel therefore consists of a stream of Control Words, where each Control Word is synchronous to its associated Audio Slot. There are as many Control Channel Words per VFP as there are Audio Slots in the VFP. The byte size of the Control Words is independent of the Audio Subslot size and is the same for each Audio Slot.

2.4.1.1 Extended Type I Format Type Descriptor

The first part of the Extended Type I Format Type descriptor is identical to the Simple Type I Format Type descriptor (see Section 2.3.1.6, “Type I Format Type Descriptor”). Three additional fields are added to describe the Packet Header and the Control Channel. The bHeaderLength field indicates the number of bytes contained in the Packet Header. The bControlSize field indicates the size in bytes of each Control Channel Word in the stream. The bSideBandProtocol field contains a constant identifying the Side Band Protocol that is used for the Packet Header and Control Channel. This specification defines a number of Side Band Protocols (see Section 2.4.4, “Side Band Protocols”).

If the Packet Header is not used, then the bHeaderLength field must be set to 0. Likewise, if the Control Channel is not implemented, then the bControlSize field must be set to 0.
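A small arithmetic sketch of the Extended Type I packet layout follows from the description above (Packet Header, then Extended Audio Slots, each a Control Word plus a Type I Audio Slot). The field names follow the descriptor; the class and method names are ours:

```java
// Size arithmetic for Extended Type I packets, per the layout described above.
class ExtTypeI {
    // One Extended Audio Slot = Control Word + one audio subslot per channel.
    static int extendedSlotSize(int bControlSize, int bSubslotSize, int nrChannels) {
        return bControlSize + bSubslotSize * nrChannels;
    }
    // Number of Extended Audio Slots carried in a VFP of the given byte length.
    static int slotsInVfp(int vfpLength, int bHeaderLength, int extSlotSize) {
        return (vfpLength - bHeaderLength) / extSlotSize;
    }
}
```

For example, with a 2-byte Control Word, 2-byte subslots, and 2 channels, each Extended Audio Slot occupies 6 bytes, so a 276-byte VFP with a 12-byte header carries 44 slots.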
If the stream does not contain actual audio data, the bNrChannels, bmChannelConfig and iChannelNames fields in the class-specific AS Interface descriptor (see the USB Audio Device Class document) must all be set to 0.

Table 2-6 Extended Type I Format Type Descriptor
Offset  Field               Size  Value     Description
0       bLength             1     Number    Size of this descriptor, in bytes: 9
1       bDescriptorType     1     Constant  CS_INTERFACE descriptor type.
2       bDescriptorSubtype  1     Constant  FORMAT_TYPE descriptor subtype.
3       bFormatType         1     Constant  EXT_FORMAT_TYPE_I. Constant identifying the Format Type the AudioStreaming interface is using.
4       bSubslotSize        1     Number    The number of bytes occupied by one audio subslot. Can be 1, 2, 3 or 4.
5       bBitResolution      1     Number    The number of effectively used bits from the available bits in an audio subslot.
6       bHeaderLength       1     Number    Size of the Packet Header, in bytes.
7       bControlSize        1     Number    Size of the Control Channel Words, in bytes.
8       bSideBandProtocol   1     Constant  Constant identifying the Side Band Protocol used for the Packet Header and Control Channel content.

2.4.2 Extended Type II Formats

Extended Audio Data Format Type II adds support for a Packet Header to the Simple Type II Format definition. The elements (Packet Header and audio data) of an Extended Type II packet are optional. The Extended Format Type II descriptor (see further) indicates which elements are present. It is therefore possible to provide only a Packet Header without audio data. The following figure further illustrates the concept.

[Figure 2-4 Extended Type II Format]

Each Virtual Frame Packet (VFP) can start with an optional Packet Header. If Packet Headers are used, they must be present in every VFP. The length of the Packet Header must be the same for every VFP. The Packet Header is then followed by the actual encoded audio frame data.
2.4.2.1 Extended Type II Format Type Descriptor

The first part of the Extended Type II Format Type descriptor is identical to the Simple Type II Format Type descriptor (see Section 2.3.2.6, “Type II Format Type Descriptor”). Two additional fields are added to describe the Packet Header. The bHeaderLength field indicates the number of bytes contained in the Packet Header. The bSideBandProtocol field contains a constant identifying the Side Band Protocol that is used for the Packet Header. This specification defines a number of Side Band Protocols (see Section 2.4.4, “Side Band Protocols”).

If the Packet Header is not used, then the bHeaderLength field must be set to 0. If the stream does not contain actual audio data, the bNrChannels, bmChannelConfig and iChannelNames fields in the class-specific AS Interface descriptor (see the USB Audio Device Class document) must all be set to 0.

Table 2-7 Extended Type II Format Type Descriptor
Offset  Field               Size  Value     Description
0       bLength             1     Number    Size of this descriptor, in bytes: 10
1       bDescriptorType     1     Constant  CS_INTERFACE descriptor type.
2       bDescriptorSubtype  1     Constant  FORMAT_TYPE descriptor subtype.
3       bFormatType         1     Constant  EXT_FORMAT_TYPE_II. Constant identifying the Format Type the AudioStreaming interface is using.
4       wMaxBitRate         2     Number    Indicates the maximum number of bits per second this interface can handle. Expressed in kbits/s.
6       wSamplesPerFrame    2     Number    Indicates the number of PCM audio samples contained in one encoded audio frame.
8       bHeaderLength       1     Number    Size of the Packet Header, in bytes.
9       bSideBandProtocol   1     Constant  Constant identifying the Side Band Protocol used for the Packet Header content.

2.4.3 Extended Type III Formats

Extended Audio Data Format Type III adds support for a Packet Header to the Simple Type III Format definition.
The elements (Packet Header and audio data) of an Extended Type III packet are optional. The Extended Format Type III descriptor (see further) indicates which elements are present. It is therefore possible to provide only a Packet Header without audio data. The following figure further illustrates the concept.

[Figure 2-5 Extended Type III Format]

Each Virtual Frame Packet (VFP) can start with an optional Packet Header. If Packet Headers are used, they must be present in every VFP. The length of the Packet Header must be the same for every VFP. The Packet Header is then followed by the actual encoded audio frame data.

2.4.3.1 Extended Type III Format Type Descriptor

The first part of the Extended Type III Format Type descriptor is identical to the Simple Type III Format Type descriptor (see Section 2.3.3.1, “Type III Format Type Descriptor”). Two additional fields are added to describe the Packet Header. The bHeaderLength field indicates the number of bytes contained in the Packet Header. The bSideBandProtocol field contains a constant identifying the Side Band Protocol that is used for the Packet Header. This specification defines a number of Side Band Protocols (see Section 2.4.4, “Side Band Protocols”).

If the Packet Header is not used, then the bHeaderLength field must be set to 0. If the stream does not contain actual audio data, the bNrChannels, bmChannelConfig and iChannelNames fields in the class-specific AS Interface descriptor (see the USB Audio Device Class document) must all be set to 0.

Table 2-8 Extended Type III Format Type Descriptor
Offset  Field               Size  Value     Description
0       bLength             1     Number    Size of this descriptor, in bytes: 8
1       bDescriptorType     1     Constant  CS_INTERFACE descriptor type.
2       bDescriptorSubtype  1     Constant  FORMAT_TYPE descriptor subtype.
3       bFormatType         1     Constant  EXT_FORMAT_TYPE_III.
Constant identifying the Format Type the AudioStreaming interface is using.
4       bSubslotSize        1     Number    The number of bytes occupied by one audio subslot. Must be set to two.
5       bBitResolution      1     Number    The number of effectively used bits from the available bits in an audio subslot.
6       bHeaderLength       1     Number    Size of the Packet Header, in bytes.
7       bSideBandProtocol   1     Constant  Constant identifying the Side Band Protocol used for the Packet Header content.

2.4.4 Side Band Protocols

This specification currently defines a single Side Band Protocol. Additional Protocols can be added later if needed.

2.4.4.1 Presentation Timestamp Side Band Protocol

The Presentation Timestamp protocol only uses the Packet Header to convey high-resolution time information over the isochronous pipe. The Packet Header is 12 bytes in size. It must occur at the start of each VFP.

Bit D0 in the bmFlags field indicates whether this is a valid timestamp (D0 = 0b1) or a repeated or non-valid timestamp (D0 = 0b0). When D0 is set to zero, the time fields of the Packet Header must be ignored.

The qNanoSeconds field indicates the time T at which the first sample in the VFP needs to be rendered with respect to the start of the stream (T = 0). The qNanoSeconds field can range from 0 to 2^63−1 ns (bit 63 is considered to be a sign bit and must be set to zero). It is up to the entity that generates the timestamp to decide to which accuracy the timestamp will be rendered.

Table 2-9 Hi-Res Presentation TimeStamp Layout
Offset  Field    Size  Value   Description
0       bmFlags  4     Bitmap  D30..0: Reserved. Must be set to 0. D31: Valid.
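A hedged Java sketch of reading the 12-byte Presentation Timestamp header: the table above is truncated, so the qNanoSeconds offset (4, immediately after the 4-byte bmFlags) is inferred from the stated 12-byte total, and the valid flag follows the prose description of bit D0; USB data is taken to be little-endian:

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

// Parse a Presentation Timestamp Packet Header (12 bytes, little-endian).
class Timestamp {
    // Returns qNanoSeconds, or -1 for a repeated/non-valid timestamp.
    static long parse(byte[] header) {
        ByteBuffer buf = ByteBuffer.wrap(header).order(ByteOrder.LITTLE_ENDIAN);
        int bmFlags = buf.getInt(0);
        boolean valid = (bmFlags & 0x1) != 0;  // D0: valid-timestamp flag (per the prose)
        long ns = buf.getLong(4);              // qNanoSeconds; bit 63 must be 0
        return valid ? ns : -1L;               // time fields ignored when D0 = 0
    }
}
```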
https://w.atwiki.jp/yoshiumi41/pages/91.html
package hoge;

import java.text.DateFormat;
import java.text.ParseException;
import java.text.SimpleDateFormat;
import java.util.Date;

public class Delivery {
    public static void main(String[] args) {
        DateFormat format = new SimpleDateFormat("yyyy/MM/dd");
        Date deliveryDate = null;
        try {
            deliveryDate = format.parse("2013/6/11");
        } catch (ParseException e) {
            e.printStackTrace();
            return;
        }
        Date now = new Date();
        System.out.println(deliveryDate.getTime() / 86400000 + 1);
        System.out.println(now.getTime() / 86400000);
        System.out.println("----");
        // The comparison operator was lost in the wiki rendering; "<" is the most
        // likely intent (delivery day already past -> error).
        if (deliveryDate.getTime() / 86400000 + 1 < now.getTime() / 86400000) {
            System.out.println("エラー");
        }
        long a = 7776000000L; // 90 days in milliseconds
        System.out.println(deliveryDate.getTime());
        System.out.println(now.getTime());
        // Operator lost here as well; ">" restored (more than 90 days ahead -> error).
        if (deliveryDate.getTime() - now.getTime() > a) {
            System.out.println("期限エラー");
        }
    }
}
https://w.atwiki.jp/yoshiumi41/pages/62.html
package part1.greet;

import java.io.IOException;
import java.io.PrintWriter;
import java.text.DateFormat;
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.GregorianCalendar;

import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

/**
 * Servlet implementation class ResultGreetServlet
 */
public class ResultGreetServlet extends HttpServlet {
    private static final long serialVersionUID = 1L;

    Date now = new Date();

    private final DateFormat year = new SimpleDateFormat("yyyy");
    private final DateFormat month = new SimpleDateFormat("MM");
    private final DateFormat day = new SimpleDateFormat("dd");
    private final int yy = Integer.parseInt(year.format(now));
    private final int mm = Integer.parseInt(month.format(now)) - 1;
    private int dd = Integer.parseInt(day.format(now));

    // おはようございます
    private final Date morning_after = new GregorianCalendar(yy, mm, dd, 5, 29, 59).getTime();
    private final Date morning_before = new GregorianCalendar(yy, mm, dd, 11, 0, 0).getTime();
    // こんにちは
    private final Date daytime_after = new GregorianCalendar(yy, mm, dd, 10, 59, 59).getTime();
    private final Date daytime_before = new GregorianCalendar(yy, mm, dd, 17, 0, 0).getTime();
    // こんばんは
    private final Date night_after = new GregorianCalendar(yy, mm, dd, 16, 59, 59).getTime();
    private final Date night_before = new GregorianCalendar(yy, mm, dd, 21, 30, 0).getTime();
    // おやすみなさい
    private final Date sleep_today_after = new GregorianCalendar(yy, mm, dd, 21, 29, 59).getTime();
    private final Date sleep_today_before = new GregorianCalendar(yy, mm, dd + 1, 0, 0, 0).getTime();
    private final Date sleep_nextday_after = new GregorianCalendar(yy, mm, dd - 1, 23, 59, 59).getTime();
    private final Date sleep_nextday_before = new GregorianCalendar(yy, mm, dd, 1, 30, 0).getTime();

    // The "&&" operators below were lost in the wiki rendering and have been restored.
    private String choicePhrase(Date now) throws IsSleepingException {
        String phrase = null;
        if (now.after(morning_after) && now.before(morning_before)) {
            phrase = "おはようございます";
        } else if (now.after(daytime_after) && now.before(daytime_before)) {
            phrase = "こんにちは";
        } else if (now.after(night_after) && now.before(night_before)) {
            phrase = "こんばんは";
        } else if (now.after(sleep_today_after) && now.before(sleep_today_before)
                || now.after(sleep_nextday_after) && now.before(sleep_nextday_before)) {
            phrase = "おやすみなさい";
        } else {
            throw new IsSleepingException();
        }
        return phrase;
    }

    protected void doPost(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        request.setCharacterEncoding("UTF-8");
        response.setContentType("text/html;charset=UTF-8");
        String name = request.getParameter("name");
        if (name.equals("")) {
            String url = "/j2eepractice/entryGreet.do";
            response.sendRedirect(url);
            return;
        }
        PrintWriter out = response.getWriter();
        DateFormat format = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss");
        // The HTML tags below were stripped by the wiki and have been restored.
        out.println("<html><head></head><body><center>");
        out.println("表示日時 " + format.format(now));
        out.println("<br/>");
        try {
            out.println(choicePhrase(now) + "、 " + name + "さん。");
        } catch (IsSleepingException e) {
            out.println("就寝中です!");
        }
        out.println("</center></body></html>");
    }
}